
    Recent Advances on Human-Computer Dialogue


Xiaojie Wang*, Caixia Yuan

Center for Intelligence Science and Technology, School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, 100876, China


Human-computer dialogue systems provide a natural-language interface between humans and computers. They are in wide demand for network information services, intelligent companion robots, and similar applications. A human-computer dialogue system typically consists of three parts: Natural Language Understanding (NLU), Dialogue Management (DM), and Natural Language Generation (NLG). Each part comprises several subtasks, each of which has received a great deal of attention and seen many improvements. But systems built in the traditional pipeline way, where the subtasks are assembled sequentially, suffer from problems such as error accumulation and propagation, and difficulties in domain transfer. Research on jointly modeling several subtasks, within one part or across different parts, has therefore advanced greatly in recent years, driven especially by the rapid development of joint models based on deep neural networks. A few works even aim to integrate all subtasks of a dialogue system into a single model, namely end-to-end models. This paper first introduces two basic frames of current dialogue systems and briefly surveys recent advances on the various subtasks, and then focuses on joint models for multiple dialogue subtasks. We review several different joint models, including the integration of several subtasks inside NLU or NLG, joint modeling across NLU and DM, and joint modeling through NLU, DM, and NLG. Both the advantages and the problems of these joint models are discussed. We consider that joint models, or end-to-end models, will be one important trend in the development of human-computer dialogue systems.

Human-computer dialogue system; Natural Language Understanding; Dialogue Management; Natural Language Generation; Joint model

1. Introduction

A human-computer dialogue system is a conversational agent. It provides an interface that helps humans use computers by talking with them in natural language. Such systems are also called spoken dialogue systems, since dialogues are often conducted in spoken language.

Dialogue systems are normally divided into two classes: goal-driven dialogue systems and non-goal-driven systems. In the early 1990s, MIT developed an automatic flight booking system [1] with the support of DARPA. It is a typical goal-driven dialogue system: it analyzed user utterances and managed multiple turns of the dialogue process in order to extract all the information necessary for booking flights, such as the departure city, departure time, and so on. Similar systems include the telecommunication service system HMIHY [2], the weather information query system JUPITER [3], the travel plan making system DARPA Communicator [4], and so on. Goal-driven dialogue systems are also called task-oriented systems.

Non-goal-driven dialogue systems are the other class. They usually respond to user utterances without any specific goal, and are also called chatterbots. ELIZA [5] might be the first chatterbot. Chatterbots have become more and more popular in recent years; some typical personal assistant systems, such as Cortana and Google Now, include chatterbots inside.

The border between goal-driven and non-goal-driven dialogues is not strict; they are often mixed in real-world dialogues. For example, we occasionally chat with human service staff when we book a flight. A good personal assistant system should therefore sometimes be able to chat with users in a non-goal-driven dialogue, and sometimes help people book tickets in a goal-driven way.

Recently, there has been rapidly increasing demand for both customer service robots and personal assistants. The former help reduce the cost of customer service for enterprises; the latter help people obtain information services in a more natural way. Much research has been done on human-computer dialogue systems, and great improvements in the models, algorithms, and performance of dialogue systems have been achieved in past decades. In recent years, with the help of rapidly increasing computational capability, more publicly available dialogue corpora, and new technologies such as deep neural networks, some new trends have emerged in the development of dialogue systems. It is time to review the various advances on these topics, especially the new trends. Limited by space, this paper does not review all aspects of human-computer dialogue systems. We pay more attention to goal-driven dialogue, especially to one important recent trend, namely the joint modeling of multiple tasks in dialogue systems.

The remainder of this paper is organized as follows: Section 2 revisits the architecture of goal-driven dialogue systems. A brief survey of each subtask in dialogue systems is given in Section 3. Section 4 is the focus of the paper: recent research advances on jointly modeling multiple subtasks in dialogues are introduced and discussed, covering both goal-driven and non-goal-driven dialogues. We draw conclusions in Section 5.

2. Frames of goal-driven dialogue systems

Fig. 1 [8] shows a basic frame of goal-driven dialogue systems. It includes three parts: Natural Language Understanding (NLU), Dialogue Management (DM), and Natural Language Generation (NLG). The user inputs can be either speech or text. If the inputs are speech (NLU is often called Spoken Language Understanding (SLU) in this case), then an Automatic Speech Recognition (ASR) module should be included in SLU; it might also be an independent module before NLU. When the system outputs are speech, NLG should include a Text-To-Speech (TTS) module.

NLU (or SLU) aims to extract task-related information from user utterances. For example, in a flight booking dialogue, task-related information includes the departure city, destination city, time of flying, and so on. When NLU (including ASR) is assumed to be able to extract all the necessary information for the task from user inputs correctly at each dialogue turn, the state of the dialogue is said to be (fully) observable.

Fig. 1. A basic frame for dialogue systems when states are fully observable.

DM controls the process of a dialogue. A Markov Decision Process (MDP) is widely used to model DM for dialogues with observable states. It accepts the output of NLU at time t (and accordingly updates the information in a frame which records the goal of the task) as the state at time t, denoted by s_t; it then outputs an act, denoted by a_t in Fig. 1, and receives a reward r_t at the same time. Let us continue with the flight booking dialogue as an example. When DM obtains the departure city from NLU (and fills it into the goal frame of the task), it must decide its next act: for example, to confirm the departure city, to ask the user for the departure time if it is unknown, or to do both in the next sentence. DM selects the optimal act sequence to maximize the long-term reward.
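To make the MDP view concrete, the following is a minimal sketch of tabular Q-learning for act selection (in Python; the states, acts, and reward values are illustrative assumptions for a toy booking task, not taken from the systems surveyed here):

```python
import random
from collections import defaultdict

# Hypothetical toy dialogue MDP: a state summarizes which slots are known.
ACTS = ["ask_departure", "ask_time", "confirm", "book"]

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def epsilon_greedy(Q, s, eps=0.1):
    """Explore with probability eps, otherwise take the greedy act."""
    if random.random() < eps:
        return random.choice(ACTS)
    return max(ACTS, key=lambda a: Q[(s, a)])

Q = defaultdict(float)
# One illustrative transition: departure city known, system asks for the time.
q_learning_update(Q, s="know_departure", a="ask_time", r=-1.0,
                  s_next="know_departure_time")
print(epsilon_greedy(Q, "know_departure"))
```

The small per-turn penalty (r = -1.0) is a common choice that pushes the learned policy toward short, successful dialogues.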

NLG transfers the acts of DM into natural language sentences. For example, if DM decides to ask the user for the destination city, NLG will transfer that decision into a sentence like “please tell me your destination city”.

The state of the dialogue shown in Fig. 1 is assumed to be fully observable at each time. It therefore cannot take the uncertainty of NLU into consideration. But the uncertainty of NLU is practically unavoidable given the current performance of ASR and NLU, and it must be handled in a workable system. Therefore, the frame in Fig. 2 [6] has received major attention recently, where a Partially Observable Markov Decision Process (POMDP) model is used for DM. The output of NLU, denoted by o_t in Fig. 2, is the observation of the dialogue at time t. The state is a probabilistic function of the observations. A belief state, which keeps a distribution over states, is used to cover the uncertainty of NLU instead of the states themselves; it is denoted by b_t in Fig. 2.
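For reference, the standard POMDP belief update behind Fig. 2 is b_{t+1}(s') ∝ P(o | s', a) Σ_s P(s' | s, a) b_t(s); a minimal sketch follows, with illustrative transition and observation tables:

```python
def belief_update(b, a, o, states, trans, obs):
    """Standard POMDP belief update: b'(s') is proportional to
    P(o | s', a) * sum_s P(s' | s, a) * b(s). trans[(s, a, s2)] and
    obs[(s2, a, o)] are assumed given (illustrative placeholders)."""
    new_b = {}
    for s2 in states:
        pred = sum(trans.get((s, a, s2), 0.0) * b[s] for s in states)
        new_b[s2] = obs.get((s2, a, o), 0.0) * pred
    norm = sum(new_b.values())
    return {s2: p / norm for s2, p in new_b.items()} if norm > 0 else new_b

# Tiny two-state demo with made-up probabilities.
states = ["want_flight", "want_hotel"]
trans = {(s, "ask", s): 1.0 for s in states}          # user goal persists
obs = {("want_flight", "ask", "heard_flight"): 0.8,   # noisy NLU output
       ("want_hotel", "ask", "heard_flight"): 0.1}
b = {"want_flight": 0.5, "want_hotel": 0.5}
print(belief_update(b, "ask", "heard_flight", states, trans, obs))
```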

As shown in Fig. 2, NLU can be further divided into domain identification, intent identification (dialogue act classification), and semantic labeling. The latter is often called slot filling, which means labeling slot names on the words of a sentence; the words labeled with slot names are the values of those slots. Slot names are usually task-specific semantic tags, such as departure city or destination. DM includes state tracking (the dialogue model) and act generation (the policy model). NLG includes sentence planning and surface realization.

Based on this frame, much work has been done on each subtask in NLU, DM, and NLG; a brief review is given as follows.

3. Modeling each subtask separately

Human-computer dialogue systems are traditionally constructed in a pipeline style. Each subtask is modeled separately as an independent module, and the modules are then assembled sequentially according to Fig. 1 or Fig. 2. Much work has been done on each subtask to improve its performance. Since this is not the focus of this paper, we only give a brief review in this section. For a more comprehensive survey of traditional dialogue systems, especially POMDP-based pipeline dialogue systems, please read the excellent reviews by Young, Gasic, & Thomson, et al. [6] and Yu, Chen, & Chen, et al. [7].

Fig. 2. A basic frame for human-computer dialogue systems when states are partially observable.

3.1. Subtasks in NLU

NLU provides dialogue information for DM. It includes domain identification, intent identification, and semantic parsing. For a user input “I want to book a flight to Beijing”, the domain is flight service, the user intent is to book a flight, and Beijing is the value of the destination slot.

Both domain identification and intent identification are often modeled as classification. Many supervised classifiers, including Support Vector Machines (SVM) [9], Maximum Entropy (ME) [10], and Deep Neural Networks (DNN) [11], have been used to identify user intent, and accuracies of more than 90% have been achieved on some dialogue corpora. However, supervised methods suffer from the lack of large amounts of labeled data, especially in new domains. Some unsupervised methods have been proposed [12], but the performance of current unsupervised methods is more than 10% lower than that of supervised methods [13].
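As a minimal illustration of supervised intent classification in the SVM style of [9] (the training utterances and intent labels below are invented toy examples, not a benchmark corpus):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training set (invented examples).
utterances = ["book a flight to Beijing", "what is the weather in Boston",
              "I want to fly to Shanghai tomorrow", "will it rain today"]
intents = ["book_flight", "ask_weather", "book_flight", "ask_weather"]

# Bag-of-words features plus a linear SVM: a classic supervised intent classifier.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(utterances, intents)
print(clf.predict(["book me a flight to Guangzhou"]))   # -> ['book_flight']
```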

Slot filling is often treated as a kind of sequence labeling over the words of a sentence, where the label set includes all slot names designed for the specific task. Both supervised and unsupervised methods have been employed on this subtask. Conditional Random Fields (CRF) and Recurrent Neural Networks (RNN) are the most frequently used models. Experimental results on the ATIS data [14,15] showed that RNNs or their variants performed better than CRFs, with labeling accuracy exceeding 90%. But Vukotic, Raymond, & Gravier, et al. [16] showed that there was no significant difference between the models on the ATIS data: since ATIS is simple, all methods achieve good performance, while on the more challenging MEDIA data, CRF performed better than RNN.
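To illustrate slot filling as sequence labeling (a generic PyTorch sketch, not the exact models of [14,15]; the vocabulary size and label set are placeholders), each word receives one slot label such as O or B-dest:

```python
import torch
import torch.nn as nn

class RNNSlotTagger(nn.Module):
    """Minimal RNN sequence labeler: embed words, run an LSTM,
    and predict one slot label per token (e.g. O, B-dest, B-time)."""
    def __init__(self, vocab_size, num_labels, emb_dim=50, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_labels)

    def forward(self, token_ids):            # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                   # (batch, seq_len, num_labels)

# Toy usage with invented word ids for "book a flight to Beijing".
model = RNNSlotTagger(vocab_size=100, num_labels=5)
logits = model(torch.tensor([[1, 2, 3, 4, 5]]))
print(logits.argmax(-1))                     # predicted label id per token
```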

Similar to intent identification, the performance achieved by unsupervised methods on slot filling [18,19,21] is lower than that of supervised methods. Ref. [17] was one of the earliest works on unsupervised slot filling; it made use of task-related query logs. Tur, Celikyilmaz, & Hakkani-Tur [18] made use of a Latent Dirichlet Allocation (LDA) model to improve slot filling. Chen et al. published a series of works on unsupervised slot filling: Chen, Wang, & Rudnicky [20] proposed an unsupervised slot induction method combining FrameNet and distributed representations; Chen, Wang, & Gershman, et al. [19,21] learned latent features by matrix factorization; and Chen, Wang, & Rudnicky [22] considered knowledge graph based random walks for slot induction.

3.2. Subtasks in DM

Most current research on DM tries to manage dialogues with partially observable states. In this case, POMDP is the most popular frame for DM, consisting of state tracking and act (policy) generation. Much work has been done separately on the two subtasks.

Research on state tracking was primarily conducted within the frame of generative models. But POMDP-based belief state update is especially difficult when the task is complex; N-best approximation, factored approximation, and other such methods were therefore proposed. Williams [23] pointed out some weaknesses based on experimental results from two publicly deployed dialogue systems, and suggested that discriminative models could achieve better performance for belief state update. Williams, Raux, Ramachandran, et al. [24] organized the first Dialog State Tracking Challenge (DSTC); the challenge has been held each year since [25,26] and has strongly promoted research on dialogue state tracking. Many discriminative models, including a structured discriminative model [27], a multi-domain learning model [28], deep neural networks [29], and models combining rules and statistics [30], were proposed at DSTC and achieved performance improvements.

Act generation builds mappings from belief states to system acts. Reinforcement learning is the current mainstream technology. Because real-world problems normally have large belief state spaces, current methods use compressed representations of the belief space, such as abstract spaces, and then use approximate methods to learn the mappings. Typical methods include Monte Carlo optimization, least-squares policy iteration, natural actor-critic optimization, Q-learning, Sarsa, and neural networks [6]. By defining kernel functions related to the objective function, Gaussian process reinforcement learning [31,32] was shown to reduce the number of samples needed for learning and to accelerate policy optimization, making it possible to do act generation over the whole belief state space. Barlier, Perolat, & Laroche, et al. [33] proposed a stochastic game frame for policy learning. They noted several limitations of (PO)MDP. The first is that the policy is assumed to be stationary in a POMDP, meaning users do not change their policy during the task. Another is the assumption of cooperation, where all dialogue participants share the same goal. Neither assumption always holds in real-world dialogues; for example, participants in a negotiation might have different goals and could change their policies. Barlier, Perolat, & Laroche, et al. [33] argued that stochastic games can deal with both limitations.
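To illustrate policy learning over a compressed belief representation, here is a minimal sketch of Sarsa with linear function approximation (the feature map over (belief, act) pairs, e.g. per-slot confidence scores, is an illustrative assumption, not a design from [6]):

```python
import numpy as np

def sarsa_linear_update(w, phi_sa, r, phi_next_sa, alpha=0.05, gamma=0.99):
    """Sarsa with linear function approximation: Q(b, a) = w . phi(b, a),
    updated toward the one-step return r + gamma * Q(b', a')."""
    td_error = r + gamma * w.dot(phi_next_sa) - w.dot(phi_sa)
    return w + alpha * td_error * phi_sa

w = np.zeros(4)
phi = np.array([0.9, 0.1, 0.0, 1.0])        # features of (belief, act) at t
phi_next = np.array([0.9, 0.8, 0.0, 1.0])   # features of (belief', act') at t+1
w = sarsa_linear_update(w, phi, r=-1.0, phi_next_sa=phi_next)
print(w)
```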

3.3. Subtasks in NLG

NLG receives dialogue acts from DM as “what to say”, and implements “how to say” through sentence planning and surface realization.

Sentence planning aggregates the content to be described into a sentence through lexicalization and referring expression generation. Lexicalization means choosing words to describe concepts and their relationships. Referring expression generation means selecting words or phrases to refer to entities. Reiter & Dale [34] found that there are stable correspondences between semantic representations and sentence structures, and used schemas to describe sentence structures. A natural language generator selects a schema and fills it in according to the inputs, obtaining the final sentence. For example, given a representation of “what to say”: “((obj-action fly)(obj-name flight)(attribute time)(attribute-value 10 am))”, the schema “<obj-name> fly at <attribute-value>” is selected, generating the sentence “flight fly at 10 am”. Stone & Doran [35] used Tree-Adjoining Grammar (TAG) to represent syntactic structure, and used rhetorical structure to obtain local coherence of the discourse; the leaf nodes of TAG elementary trees could be used for surface realization. Dusek & Jurcicek [36] proposed an algorithm that constructs sentence plan tree candidates incrementally from an empty tree, based on A* search with a perceptron ranker, and ranks the trees, finally obtaining a sentence frame semantically consistent with the inputs.
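A minimal sketch of the schema selection and filling step described above (the schema string is the paper's own example; the lookup table and frame encoding around it are illustrative assumptions):

```python
# Hypothetical schema table keyed by the frame's object-action.
SCHEMAS = {"fly": "<obj-name> fly at <attribute-value>"}

def realize(frame):
    """Fill a schema with values from a 'what to say' frame, e.g.
    ((obj-action fly)(obj-name flight)(attribute time)(attribute-value 10 am))."""
    schema = SCHEMAS[frame["obj-action"]]
    for key, value in frame.items():
        schema = schema.replace(f"<{key}>", value)
    return schema

print(realize({"obj-action": "fly", "obj-name": "flight",
               "attribute": "time", "attribute-value": "10 am"}))
# -> "flight fly at 10 am"
```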

Surface realization selects function words, word forms, and so on, to make the outputs syntactically well-formed and correctly spelled. Lavoie & Rambow [37] proposed a series of realization processes, such as function word insertion, linearization of the tree representation, morphological inflection, and surface formatting (e.g., adding punctuation). White & Baldridge [38] used Combinatory Categorial Grammar and a bottom-up chart-based decoder, along with argument types, lexical categories, and so on, to make sure the generated sentence is grammatical.

We can see from the brief survey above that there have been many advances on each subtask. But there is also much room for improvement; in particular, under the partially observable models, goal-driven dialogue systems have only been applied to some small-scale tasks. Young, Gasic, & Thomson, et al. [6] and Yu, Chen, & Chen, et al. [7] have given excellent analyses and surveys of the problems of each subtask.

4. Jointly modeling multiple subtasks

Traditionally, dialogue systems were built in a pipeline way: a model for each subtask was built separately and then assembled into a whole system. Pipeline systems are conceptually clear; each part focuses on its own problems, and each model is developed independently. But there are also some limitations to pipeline systems.

Firstly, a pipeline cannot make use of the interaction information between different parts. There are significant interactions between the subtasks, and these interactions are helpful for improving system performance. Take intent identification and slot filling in NLU as an example: slot filling is helpful to intent identification, and vice versa. In a flight booking task, if only the destination slot is labeled in a sentence, then the probability that the intent of the sentence is to tell the destination is high; conversely, if the intent of a sentence is to tell the departure city, then a departure city will occur in the sentence with high probability. If the interactions between the two subtasks can be modeled properly, both tasks should benefit. Similar situations hold for other subtasks.

Secondly, the models for each subtask are trained separately in a pipeline system. This brings difficulties on two sides. On the one hand, developers of dialogue systems usually only get feedback from end users, who inform them of the final performance of the system; it is difficult to back-propagate or assign the final error signals of the system to each subtask, and it is also time-consuming and laborious to obtain labeled data for each subtask. On the other hand, because it is difficult or impossible to ensure full correctness in each subtask, errors in earlier subtasks may hurt later ones. The errors may accumulate and grow through the pipeline, and may even become uncontrollable.

Thirdly, the interdependencies of the subtasks in dialogue systems make online adaptation challenging. For example, when one module (e.g., NLU) is retrained with new data, all the other modules (e.g., DM) that depend on it become sub-optimal, because they were trained on the output distributions of the older version of the NLU module. Although the ideal solution is to retrain the entire pipeline to ensure global optimality, this requires significant human effort [39].

Recent advances explore how to overcome the above limitations of pipeline systems, and joint modeling has proven to be an effective way. There is now much work on joint models, ranging from jointly modeling subtasks within NLU, DM, or NLG respectively, to jointly modeling subtasks across NLU and DM, and even jointly modeling across NLU, DM, and NLG.

Here, “joint model” or “jointly modeling” means that two or more subtasks are implemented in a single model or in a strongly coupled frame, and the model (or frame) is trained as a whole or simultaneously, instead of subtask by subtask.

4.1. Jointly modeling subtasks in NLU

In recent years, with the success of deep learning in a variety of applications, several different types of deep neural networks have been used for jointly modeling NLU subtasks.

Xu & Sarikaya [40] described a joint model for intent detection and slot filling based on convolutional neural networks (CNN). Features were extracted through CNN layers and shared by the two tasks. Experimental results on the ATIS corpus and four other unpublished datasets showed that the joint model outperformed a triangular CRF on both intent identification and slot filling.
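A generic sketch of the shared-encoder idea behind such joint models (PyTorch; this illustrates the pattern, not the exact architecture of [40]): one convolutional encoder feeds both a sentence-level intent head and a token-level slot head, so both losses train the shared features.

```python
import torch
import torch.nn as nn

class JointIntentSlotCNN(nn.Module):
    """Shared CNN features with two heads: sentence-level intent
    classification and per-token slot filling (illustrative sketch)."""
    def __init__(self, vocab_size, num_intents, num_slots, emb=50, ch=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, ch, kernel_size=3, padding=1)
        self.intent_head = nn.Linear(ch, num_intents)
        self.slot_head = nn.Linear(ch, num_slots)

    def forward(self, token_ids):                  # (batch, seq_len)
        h = torch.relu(self.conv(self.emb(token_ids).transpose(1, 2)))
        h = h.transpose(1, 2)                      # (batch, seq_len, ch)
        intent_logits = self.intent_head(h.mean(dim=1))  # pooled sentence vector
        slot_logits = self.slot_head(h)            # one label per token
        return intent_logits, slot_logits

model = JointIntentSlotCNN(vocab_size=100, num_intents=3, num_slots=5)
intent, slots = model(torch.tensor([[1, 2, 3, 4, 5]]))
print(intent.shape, slots.shape)   # (1, 3) and (1, 5, 5)
```

Training would sum a cross-entropy loss on the intent logits and a per-token cross-entropy loss on the slot logits, back-propagating both through the shared encoder.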

Guo, Tur, & Yih, et al. [41] proposed a Recursive Neural Network (RecNN) based approach to jointly model domain identification, intent identification, and semantic parsing. Compared with previous methods that modeled the three tasks separately, their model achieved competitive performance on the ATIS data and Microsoft Cortana dialogue data.

Shi, Yao, & Chen, et al. [42] proposed an RNN to jointly model the three subtasks in NLU, and achieved better performance than previous methods. LSTM (Long Short-Term Memory) was also used for jointly modeling intent identification and semantic parsing [43]; it achieved better performance on the DSTC2 data than separate models.

Besides deep neural networks, traditional CRFs have also been used as joint models. Lee & Ko [44] proposed a CRF-based tag-addition method to jointly model different subtasks: the model added three positions, for speech act, operator, and target respectively, before each input sentence, and a single CRF model was used to label named entities in the sentence as well as the three added tags. Models combining CRF with CNN have also been used for joint modeling. All of them achieved performance better than or competitive with the state-of-the-art independent models.

Jointly modeling subtasks has been shown to be a promising way to improve NLU. Although some improvements have been achieved, several problems remain to be clarified or solved. Among them, we think the following three are especially important and interesting.

The first problem is how to jointly model multiple tasks. Previous methods took two approaches. One was to transform the different tasks into the same type of problem and then handle them in a single model. A typical example is Lee & Ko [44], who transformed intent identification (a normal classification task) into a labeling task by adding a tag position for the intent before the sentence; a single labeling model could then be used for intent and slot labeling simultaneously. This might be thought of as a flat, parallel mode. The other approach was to build hierarchical models, placing classification and labeling in different layers. Both Shi, Yao, & Chen, et al. [42] and Zhou, Wen, & Wang [43] employed hierarchical neural network models for intent classification and slot labeling. The former put slot labeling at the bottom of the hierarchical network and intent identification on top. The latter tried two different arrangements (one exactly the same as the former, the other inverted) and found that whichever subtask was put on top of the network always gained more benefit from the hierarchical structure. It is not yet clear which kind of joint arrangement is better for given subtasks; this problem has not been fully investigated.
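To make the tag-addition (flat) mode concrete, a minimal sketch of the label-sequence construction (the pseudo-token and tag names are illustrative, not those of [44]):

```python
def add_intent_tag(tokens, slot_labels, intent):
    """Prepend a pseudo-token carrying the intent so that a single
    sequence labeler predicts intent and slots together (flat joint mode)."""
    return ["<INTENT>"] + tokens, [intent] + slot_labels

tokens = ["book", "a", "flight", "to", "Beijing"]
slots = ["O", "O", "O", "O", "B-dest"]
x, y = add_intent_tag(tokens, slots, intent="book_flight")
print(list(zip(x, y)))
# [('<INTENT>', 'book_flight'), ('book', 'O'), ..., ('Beijing', 'B-dest')]
```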

Almost all current joint models are supervised. They need labeled data for all subtasks, and deep neural network models demand large amounts of data for good performance. So the second problem is how to obtain large amounts of labeled data, or whether we should pursue unsupervised approaches. As we have seen, unsupervised models performed significantly worse than supervised models on single subtasks, and there is still no unsupervised approach for joint models. Could joint tasks find better unsupervised models than single tasks by utilizing the interaction information between two or more subtasks? If so, joint models would gain another important advantage over pipeline models.

Another problem is domain adaptation. It is expensive to build large amounts of labeled data, and even more expensive to do so for each domain. How can we reuse the labeled data of one domain in another? We have to deal with new words, new intents, new slot values, or even new slots in dialogues in new domains. There is some early work on this problem. For example, Yazdani & Henderson [45] explored a zero-shot representation learning model for SLU in new dialogue domains. They integrated intents (acts) and slots in a label representation learning model, with different domains sharing word embedding parameters. The experimental results showed that the word-vector-based model could adapt well to new domains. We will see in the next section that word-based models could also be a possible way toward cross-domain adaptation in other joint models.

4.2. Jointly modeling subtasks across NLU and DM

Normally, DM receives the semantic labels of a sentence from NLU as its input. Some recent work has crossed this gap and uses the sentence directly as the input of DM.

Henderson, Thomson, & Young [46] proposed a word-based RNN model for state tracking. The model mapped the n-grams of user inputs to dialogue states without using an explicit semantic decoder; each slot was handled by a separate RNN model. The method was evaluated on the second Dialog State Tracking Challenge (DSTC2) corpus, and the results demonstrated consistently higher performance compared with pipeline models. Mrksic & Kadlec, et al. [47] proposed a multi-domain state tracking model based on the work in Ref. [46]; the results showed that the model could achieve good performance when combined with some delexicalized features.

Reinforcement Learning (RL) has been the major tool for policy modeling. Most current joint models that include act generation employ Deep Reinforcement Learning (DRL), which was first proposed in Ref. [48] for playing computer games.

Mnih, Kavukcuoglu, & Silver, et al. [48,49] implemented a screen-based game playing agent. The agent selected game actions according to screen images. They proposed a deep Q-learning algorithm on a Deep Q-Network (DQN), with two convolutional layers and two fully connected feed-forward layers, to learn the Q-function. A mapping from image inputs to game acts was learned. By using DRL, screen understanding was integrated with game operation selection into an end-to-end model. The model achieved better or competitive scores compared with human players across a number of different games. In fact, game playing is very similar to dialogue: screen images are analogous to user utterances, game operations are analogous to dialogue acts, and the goal of the game agent is to maximize long-term reward over multiple turns, which is also analogous to the goal in dialogue. The main difference between games and dialogues is that the inputs of games are continuous images, while the inputs of dialogues are discrete language symbols. Narasimhan, Kulkarni, & Barzilay [50,53] proposed an LSTM-DQN model for text-based network games, where an LSTM decoded the text inputs into a vector representation which was then fed to a DNN to learn a Q-function. It achieved better performance than some previous models.
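A minimal sketch of the deep Q-learning update at the core of DQN (generic PyTorch; the network sizes and transitions are placeholders, not the convolutional architecture of [48]):

```python
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))  # 8-dim state, 4 acts
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def dqn_step(s, a, r, s_next, done):
    """One deep Q-learning step: regress Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():                       # the target is held fixed
        target = r + gamma * q_net(s_next).max(dim=1).values * (1 - done)
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# One illustrative batch of two random transitions.
s, s2 = torch.randn(2, 8), torch.randn(2, 8)
print(dqn_step(s, torch.tensor([0, 2]), torch.tensor([1.0, -1.0]),
               s2, torch.tensor([0.0, 1.0])))
```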

Owing to its great success in computer games and the similarities between games and dialogues, DRL was rapidly adopted for building end-to-end joint models for dialogue systems.

Cuayahuitl & Keizer [51] used deep reinforcement learning to generate dialogue policies for a non-cooperative dialogue; they ran experiments on a card game instead of dialogues. Cuayahuitl [52] tried to construct a joint model from the outputs of ASR through to act generation, using the DRL approach of Cuayahuitl & Keizer [51] for DM, but showed only some simple DRL results without a performance evaluation of the dialogue system. Zhao & Eskenazi [39] jointly modeled state tracking and act generation in a deep reinforcement learning frame, with an LSTM used to track the dialogue history; they also proposed a variant with supervised information from the dialogue state.

Dialogue states were manually designed in past dialogue systems; this design was subjective and time-consuming. DRL provides an efficient way to avoid explicit design of the dialogue states. But it is not easy to train Q-function networks like DQN or LSTM-DQN. The samples fed to the network are transitions (s_t, a_t, r_t, s_{t+1}), t = 1, 2, ..., N, or something similar. They are not independent and identically distributed (i.i.d.), because s_{t+1} (the state at time t+1) is determined by both s_t and a_t. The Q-function networks are therefore prone to oscillation and difficult to converge. For training the DQN, Mnih, Kavukcuoglu, & Silver, et al. [48] used the experience replay mechanism proposed by Lin [54], which randomly samples previous transitions and thereby smooths the training distribution over many past behaviors. Recently, Hasselt, Guez, & Silver [55] addressed the overestimation problem of standard Q-learning by introducing double DQN, and Schaul, Quan, & Antonoglou, et al. [56] improved the convergence speed of DQN via prioritized experience replay. Although these measures work to some extent and have helped DRL achieve great success in computer games, there is no general guarantee of convergence for DRL. Ma & Wang [57] showed that Q-function networks can converge well when a dialogue has a small act space, but the situation worsens as the act space of the dialogue grows. How to train Q-function networks will remain a problem in the near future.
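A minimal sketch of the experience replay mechanism [54] used to break the correlation between consecutive transitions (the buffer capacity and batch size are illustrative choices):

```python
import random
from collections import deque

class ReplayBuffer:
    """Store transitions (s, a, r, s_next, done) and sample them uniformly,
    so training batches are closer to i.i.d. than consecutive steps are."""
    def __init__(self, capacity=10000):
        self.buf = deque(maxlen=capacity)     # oldest transitions are evicted

    def push(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size=32):
        return random.sample(self.buf, batch_size)

buffer = ReplayBuffer()
for t in range(100):                          # fill with dummy transitions
    buffer.push((t, 0, -1.0, t + 1, False))
print(len(buffer.sample(4)))                  # -> 4 randomly drawn transitions
```

Prioritized experience replay [56] replaces the uniform sampling here with sampling proportional to the magnitude of each transition's TD error.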

4.3. Jointly modeling subtasks in NLG

Much work has been done in recent years on jointly modeling content selection, sentence planning, and surface realization. These methods map acts (semantic frames or meaning representations) from DM to natural language sentences directly, and can mainly be divided into two types: syntax-based models and sequence-based models.

Syntax-based models treat sentence generation as the inverse of sentence understanding. They adopt grammars similar to those used in syntactic analysis, and generate natural language sentences by continually rewriting syntactic or semantic symbols until reaching the leaf nodes (words) of the syntax tree. Most previous work focused on hand-crafted generation grammars: Belz [58] used template-based, domain-specific rules to build a CFG (Context-Free Grammar) manually. More recent work learned grammars automatically from aligned corpora. Wong & Mooney [59] proposed an algorithm to learn a synchronous context-free grammar (SCFG) automatically from a corpus of aligned sentence-semantic frame pairs; the model used a left-to-right Earley chart to map semantic frames to natural language sentences, re-ranking the mapping results with a language model during decoding. Lu & Ng [60] proposed an SCFG-based forest-to-string generation algorithm. Konstas & Lapata [61] used a bottom-up chart decoder to learn a PCFG from phrase-semantic slot pairs harvested from a sentence-semantic frame aligned corpus, re-ranked the generated trees by combining n-grams and dependency relations, and output the sentence formed by the leaf nodes of the top-ranked tree.

The sentences output by syntax-based methods are grammatical, but good grammars are difficult to obtain: manual rules are expensive and domain dependent, while grammar learning relies on a large aligned corpus. Limited by their grammars, none of the above methods can deal with semantic frames that did not occur in the training data, and the sentences they generate lack diversity.

Sequence-based models treat a sentence as a sequence of words or phrases, and predict the generation probability of the next word based on the words already generated. To cover the semantic frame in the generated sentence, the sequence model takes the dialogue act into consideration, so the generation probability of the n-th word can be estimated by p_θ(w_n | w_1, ..., w_{n-1}; DA), where DA is the current dialogue act given by the semantic frame and θ denotes the parameters of the probability function.

Several neural-network-based models, especially RNNs, have been used to approximate this probability. Zhang & Lapata [62] described work using RNNs to generate Chinese poetry. Wen, Gasic, & Kim, et al. [63] jointly trained a forward RNN generator, a CNN, and a backward RNN ranker to generate natural sentences for a specific DA. Wen, Gasic, & Mrksic [64] used a DA control gate for sentence planning and an LSTM for surface realization; the two parts are jointly trained to generate sentences that are grammatical and semantically consistent with the DA. Mei, Bansal, & Walter [65] proposed an end-to-end, domain-independent neural encoder-aligner-decoder model to jointly model content selection, sentence planning, and surface realization: an LSTM first encodes all semantic slots, an alignment model then extracts the salient slots, and finally a decoder generates the natural sentence. Dusek & Jurcicek [66] proposed an attention-based LSTM to encode the input DA and the words already generated, with an LSTM decoder and a logistic classifier generating the remaining words in sequence. They demonstrated that their model can achieve performance comparable to other RNN-based models with less training data.
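A generic sketch of a DA-conditioned RNN generator computing p_θ(w_n | w_1, ..., w_{n-1}; DA) (an illustration of the conditioning idea, not the exact architecture of [63,64]):

```python
import torch
import torch.nn as nn

class DAConditionedLM(nn.Module):
    """Predict the next word from the words so far plus a dialogue-act
    vector: the DA encoding is concatenated to every word embedding
    before the LSTM (one simple conditioning choice among several)."""
    def __init__(self, vocab_size, da_dim, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb + da_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, words, da_vec):       # words: (B, T), da_vec: (B, da_dim)
        da = da_vec.unsqueeze(1).expand(-1, words.size(1), -1)
        h, _ = self.lstm(torch.cat([self.emb(words), da], dim=-1))
        return self.out(h)                  # next-word logits at each position

model = DAConditionedLM(vocab_size=200, da_dim=8)
logits = model(torch.tensor([[1, 2, 3]]), torch.zeros(1, 8))
print(logits.shape)                         # torch.Size([1, 3, 200])
```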

Compared to syntax-based models, sequence-based models do not need fine-grained alignment data for training, and their flexibility in modeling dialogue history, context, and word selection brings diversity to the generated sentences. On the other hand, because the generation process of sequence-based models is not controlled by any specific grammar, it is unavoidable that they sometimes generate ungrammatical sentences, and they may also drop or repeat some slots of the DA.

4.4. Joint models for subtasks across NLU, DM, and NLG

It is attractive to jointly model all subtasks of a dialogue system, from NLU and DM to NLG. This would be a true end-to-end model, which receives the user input and outputs a natural language sentence as the response. But for a goal-driven task, a response is not the only thing a model should provide at each turn. A successful agent must keep and update a task-related record throughout the dialogue in order to select dialogue acts properly. For example, in flight booking, an agent should record information such as the time and the departure city in order to carry out the reservation correctly; this information must be maintained and updatable throughout the whole dialogue. There is currently no full end-to-end model for goal-driven tasks; most previous end-to-end models jointly modeled subsets of the subtasks, as described above.

For a non-goal-driven task, by contrast, it is not necessary for a chatterbot to keep so much information; a response is often the only thing the model must produce. Some full end-to-end models for response generation have been proposed recently.

Data-driven end-to-end response generation has received much attention in recent years, borrowing ideas from other research areas. Among the different models for response generation, a model borrowed from machine translation was first proposed by Ritter, Cherry, & Dolan [67]. They used phrase-based models from traditional statistical machine translation for response generation on social networks. The experimental results showed that statistical machine translation based models outperformed information retrieval based models. Along this line, utilizing the RNN language model, Sordoni, Galley, & Auli, et al. [68] proposed a dynamic-context generative model to address the data sparsity that arises when contextual information is integrated into classic statistical models. Serban, Sordoni, & Bengio, et al. [69] extended the hierarchical recurrent encoder-decoder neural network proposed in Ref. [70], originally suggested for improving query suggestion, into an end-to-end dialogue model.

With the recent advances of sequence-to-sequence models in machine translation, sequence-to-sequence models for non-goal-driven dialogue have also been proposed. Shang, Lu, & Li [71] presented an RNN-based encoder-decoder Neural Responding Machine with an attention signal; Vinyals & Le [72] proposed an LSTM-based sequence-to-sequence Conversational Model. Fig. 3 shows the typical structure of a sequence-to-sequence model. Li, Galley, & Brockett, et al. [73] used Maximum Mutual Information (MMI) as the objective function to measure the mutual dependence between inputs and outputs. Experimental results showed that MMI helped sequence-to-sequence models produce more diverse responses.
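A minimal sketch of the MMI-bidi reranking idea of [73] at decoding time: rerank an N-best list by (1-λ) log p(T|S) + λ log p(S|T). The two scoring functions below are toy placeholders standing in for trained forward and backward sequence-to-sequence models:

```python
def mmi_rerank(source, candidates, log_p_t_given_s, log_p_s_given_t, lam=0.5):
    """Rerank N-best responses with the MMI-bidi objective:
    score(T) = (1 - lam) * log p(T|S) + lam * log p(S|T)."""
    def score(t):
        return ((1 - lam) * log_p_t_given_s(source, t)
                + lam * log_p_s_given_t(t, source))
    return max(candidates, key=score)

# Placeholder scorers (not real models): the backward score gives bland,
# source-independent replies a low p(S|T), so MMI demotes them.
fake_fwd = lambda s, t: -len(t.split())
fake_bwd = lambda t, s: -10.0 if t == "i don't know" else -1.0
print(mmi_rerank("how are you", ["i don't know", "i am fine, thanks for asking"],
                 fake_fwd, fake_bwd))        # -> the more specific reply
```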

These approaches jointly model the process from input sentences to response generation for non-goal-driven dialogues. They do not include semantic parsing or explicit DM, and therefore cannot be applied to goal-driven dialogues directly. Dodge, Gane, & Zhang, et al. [74] also considered the difficulty of evaluating these models. They therefore proposed a set of tasks, including question answering, recommendation, question answering + recommendation, and chatting, to test the abilities of end-to-end dialogue systems. This might be an interesting way to bridge non-goal-driven and goal-driven dialogues. On the other hand, there are many advances in sequence-to-sequence machine translation models that might be borrowed to build more powerful goal-driven dialogue systems in the future.

It is clear that a full end-to-end goal-driven dialogue system should not only output a final sentence in response to an input sentence, but also keep and update rich internal representations or memories of the dialogue. These internal memories can either be explicitly extracted and represented, or be validated by external tasks such as question answering.

5. Conclusions

This paper gave a brief survey of goal-driven human-computer dialogue systems, including two commonly used frames and some recent research on each subtask of dialogue systems. However, the major concern of the paper was joint models, which model multiple dialogue subtasks simultaneously.

Fig. 3. Sequence-to-sequence Conversational Model.

We consider joint modeling to be one important trend for dialogue systems; indeed, there has been a rapid increase in work on joint models in recent years. We have tried to survey most of the related work in this paper, classifying it by which subtasks are taken into the joint model. As we have seen, there are several different types of joint models, such as flat or hierarchical ones. There are also several different extents of integration, including the integration of several subtasks inside NLU, DM, or NLG; joint modeling of subtasks across NLU and DM; and joint modeling of the whole process through NLU, DM, and NLG.

Although joint models are still at their beginning, they have shown some advantages over previous pipeline models. One significant advantage is that they can model the interactions between different subtasks within a single model, improving the performance of the whole system. Another practical advantage is that joint models may remove some intermediate representations that previously had to be designed manually, which can reduce the subjectivity of human design and make a dialogue model more flexible in adapting to different tasks in different domains.

It is not surprising that most recent joint models are constructed with deep neural networks, which provide uniform structures and training procedures for different subtasks. Reinforcement learning is still the main tool for DM. Although neural networks have long been used in reinforcement learning, it is the recent combination of reinforcement learning with deep neural networks, namely deep reinforcement learning, that has pushed research on joint models forward greatly.

Finally, many problems in joint models await solutions: how to get enough data for building a dialogue system, how to train a joint model efficiently, how to adapt a joint model from one domain to another, and so on. Some of these problems are of theoretical interest; others have practical appeal.

    Acknowledgements

This paper is partly supported by the National Natural Science Foundation of China (No. 61273365), the discipline building plan in 111 base (No. B08004), the Engineering Research Center of Information Networks of MOE, and the Co-construction Program with the Beijing Municipal Commission of Education.

[1] S. Seneff, L. Hirschman, V.W. Zue, Interactive problem solving and dialogue in the ATIS domain, in: Proceedings of the Workshop on Speech and Natural Language, Pacific Grove, California, February 19-22, 1991, pp. 354-359.

[2] A.L. Gorin, G. Riccardi, J.H. Wright, How may I help you? Speech Commun. 23 (1) (1997) 113-127.

[3] V. Zue, S. Seneff, J. Glass, et al., JUPITER: a telephone-based conversational interface for weather information, IEEE Trans. Speech Audio Process. 8 (1) (2000) 85-96.

[4] M. Walker, J. Aberdeen, J. Boland, et al., DARPA communicator dialog travel planning systems: the June 2000 data collection, in: Proceedings of 2001 European Conference on Speech Communication and Technology, 2001, pp. 1371-1374.

[5] J. Weizenbaum, ELIZA - a computer program for the study of natural language communication between man and machine, Commun. ACM 9 (1) (1966) 36-45.

[6] S. Young, M. Gasic, B. Thomson, et al., POMDP-based statistical spoken dialogue systems: a review, Proc. IEEE 101 (5) (2013) 1160-1179.

[7] K. Yu, L. Chen, B. Chen, et al., Cognitive technology in task-oriented dialogue systems - concepts, advances and future, Chin. J. Comput. 37 (18) (2014) 1-17.

[8] D. Jurafsky, J.H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, second ed., Prentice-Hall, 2007.

[9] P. Haffner, G. Tur, J. Wright, Optimizing SVMs for complex call classification, in: Proceedings of International Conference on Acoustics, Speech and Signal Processing, Hong Kong, April 2003.

[10] J. Ang, Y. Liu, E. Shriberg, Automatic dialog act segmentation and classification in multiparty meetings, in: Proceedings of International Conference on Acoustics, Speech and Signal Processing, 2005.

[11] R. Sarikaya, G.E. Hinton, A. Deoras, Application of deep belief networks for natural language understanding, IEEE Trans. Audio, Speech Lang. Process. 22 (4) (2014) 778-784.

[12] A. Ezen-can, K.E. Boyer, Unsupervised classification of student dialogue acts with query-likelihood clustering, Int. Conf. Educ. Data Min. (2013) 20-27.

[13] A. Ezen-can, K.E. Boyer, Combining task and dialogue streams in unsupervised dialogue act models, in: Proceedings of 2014 Special Interest Group on Dialogue (SIGDIAL) Conference, Association for Computational Linguistics, Philadelphia, U.S.A., 18-20 June 2014, pp. 113-122.

[14] K. Yao, G. Zweig, M. Hwang, et al., Recurrent neural networks for language understanding, Interspeech (2013) 2524-2528.

[15] G. Mesnil, Y. Dauphin, K. Yao, et al., Using recurrent neural networks for slot filling in spoken language understanding, IEEE/ACM Trans. Audio Speech Lang. Process. 23 (3) (2015) 530-539.

[16] V. Vukotic, C. Raymond, G. Gravier, et al., Is it time to switch to word embedding and recurrent neural networks for spoken language understanding?, in: Interspeech 2015, 2015.

[17] G. Tur, D.Z. Hakkani-Tur, D. Hillard, et al., Towards unsupervised spoken language understanding: exploiting query click logs for slot filling, in: Proceedings of 2011 INTERSPEECH, 2011, pp. 1293-1296.

[18] G. Tur, A. Celikyilmaz, D.Z. Hakkani-Tur, Latent semantic modeling for slot filling in conversational understanding, in: Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.

[19] Y. Chen, W.Y. Wang, A. Gershman, et al., Matrix factorization with knowledge graph propagation for unsupervised spoken language understanding, in: ACL-IJCNLP 2015, 2015, pp. 483-494.

[20] Y. Chen, W.Y. Wang, A.I. Rudnicky, Leveraging frame semantics and distributional semantics for unsupervised semantic slot induction in spoken dialogue systems, in: Proceedings of 2014 Spoken Language Technology (SLT), 2014.

[21] Y. Chen, W.Y. Wang, A. Gershman, et al., Matrix factorization with knowledge graph propagation for unsupervised spoken language understanding, in: Proceedings of 2015 ACL-IJCNLP, 2015, pp. 483-494.

[22] Y. Chen, W.Y. Wang, A.I. Rudnicky, Jointly modeling inter-slot relations by random walk on knowledge graphs for unsupervised spoken language understanding, in: Proceedings of 2015 NAACL-HLT, 2015.

[23] J.D. Williams, Challenges and opportunities for state tracking in statistical spoken dialog systems: results from two public deployments, J. Sel. Top. Signal Process. 6 (8) (2012) 959-970.

[24] J. Williams, A. Raux, D. Ramachandran, et al., The dialog state tracking challenge, in: Proceedings of 2013 Special Interest Group on Dialogue (SIGDIAL), Metz, France, August 2013, pp. 404-413.

[25] M. Henderson, B. Thomson, J. Williams, The second dialog state tracking challenge, in: Proceedings of 2014 Special Interest Group on Dialogue (SIGDIAL) Conference, Association for Computational Linguistics, Philadelphia, U.S.A., 18-20 June 2014, pp. 263-272.

[26] M. Henderson, B. Thomson, J. Williams, The third dialog state tracking challenge, in: Proceedings of 2014 Spoken Language Technology (SLT), 2014, pp. 2-7.

[27] S. Lee, Structured discriminative model for dialog state tracking, in: Proceedings of 2013 Special Interest Group on Dialogue (SIGDIAL), Metz, France, 2013.

[28] H. Ren, W. Xu, Y. Zhang, et al., Dialog state tracking using conditional random fields, in: Proceedings of 2013 Special Interest Group on Dialogue (SIGDIAL), Metz, France, 2013.

[29] M. Henderson, B. Thomson, S. Young, Deep neural network approach for the dialog state tracking challenge, in: Proceedings of 2013 Special Interest Group on Dialogue (SIGDIAL), Metz, France, 2013.

[30] Q. Xie, K. Sun, S. Zhu, et al., Recurrent polynomial network for dialogue state tracking with mismatched semantic parsers, in: Proceedings of 2015 Special Interest Group on Dialogue (SIGDIAL), 2015, pp. 295-304.

[31] M. Gasic, S. Young, Gaussian processes for POMDP-based dialogue manager optimization, IEEE/ACM Trans. Audio Speech Lang. Process. 22 (1) (2014) 28-40.

[32] I. Casanueva, T. Hain, H. Christensen, et al., Knowledge transfer between speakers for personalised dialogue management, in: Proceedings of 2015 Special Interest Group on Dialogue (SIGDIAL), 2015.

[33] M. Barlier, J. Perolat, R. Laroche, et al., Human-machine dialogue as a stochastic game, in: Proceedings of 2015 Special Interest Group on Dialogue (SIGDIAL), 2015.

[34] E. Reiter, R. Dale, Building applied natural language generation systems, Nat. Lang. Eng. 3 (1) (1997) 57-87.

[35] M. Stone, C. Doran, Sentence planning as description using tree adjoining grammar, in: Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, Madrid, Spain, 1997, pp. 198-205.

[36] O. Dusek, F. Jurcicek, Training a natural language generator from unaligned data, in: Proceedings of the 53rd ACL and the 7th IJCNLP, Beijing, China, 2015, pp. 451-461.

[37] B. Lavoie, O. Rambow, A fast and portable realizer for text generation systems, in: Proceedings of the Fifth Conference on Applied Natural Language Processing, ANLC, Washington, DC, 1997, pp. 265-268.

[38] M. White, J. Baldridge, Adapting chart realization to CCG, in: Proceedings of 9th European Workshop on Natural Language Generation, Budapest, Hungary, 2003.

[39] T. Zhao, M. Eskenazi, Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning, arXiv preprint, https://arxiv.org/abs/1606.02560, 2016.

[40] P. Xu, R. Sarikaya, Convolutional neural network based triangular CRF for joint intent detection and slot filling, in: 2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2013, pp. 78-83.

[41] D. Guo, G. Tur, W. Yih, et al., Joint semantic utterance classification and slot filling with recursive neural networks, in: Proceedings of Spoken Language Technology 2014, 2014, pp. 554-559.

[42] Y. Shi, K. Yao, H. Chen, et al., Contextual spoken language understanding using recurrent neural networks, in: Proceedings of 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, 2015, pp. 5271-5275.

[43] Q. Zhou, L. Wen, X. Wang, A hierarchical LSTM model for joint tasks, in: Proceedings of the Fifteenth China National Conference on Computational Linguistics (CCL 2016), 2016.

[44] C. Lee, Y. Ko, A simultaneous recognition framework for the spoken language understanding module of intelligent personal assistant software on smart phones, in: Proceedings of 2015 ACL-IJCNLP Conference, 2015, pp. 818-822.

[45] M. Yazdani, J. Henderson, A model of zero-shot learning of spoken language understanding, in: Proceedings of the 2015 Conference on Empirical Methods on Natural Language Processing (EMNLP 2015), 2015.

[46] M. Henderson, B. Thomson, S. Young, Word-based dialog state tracking with recurrent neural networks, in: Proceedings of 2014 Special Interest Group on Dialogue (SIGDIAL), 2014.

[47] N. Mrksic, R. Kadlec, et al., Multi-domain dialog state tracking using recurrent neural networks, in: Proceedings of 2015 ACL-IJCNLP, 2015, pp. 794-799.

[48] V. Mnih, K. Kavukcuoglu, D. Silver, et al., Playing Atari with deep reinforcement learning, arXiv preprint, https://arxiv.org/abs/1312.5602, 2013.

[49] V. Mnih, K. Kavukcuoglu, D. Silver, et al., Human-level control through deep reinforcement learning, Nature 518 (2015) 529-533.

[50] K. Narasimhan, T.D. Kulkarni, R. Barzilay, Language understanding for text-based games using deep reinforcement learning, arXiv preprint, http://arxiv.org/abs/1506.08941, 2015.

[51] H. Cuayahuitl, S. Keizer, Strategic dialogue management via deep reinforcement learning, in: NIPS Deep Reinforcement Learning Workshop, 2015.

[52] H. Cuayahuitl, SimpleDS: a simple deep reinforcement learning dialogue system, arXiv preprint, http://arxiv.org/abs/1601.04574, 2016.

[53] K. Narasimhan, T.D. Kulkarni, R. Barzilay, Language understanding for text-based games using deep reinforcement learning, Comput. Sci. 40 (4) (2015) 1-5.

[54] L. Lin, Reinforcement Learning for Robots Using Neural Networks, Technical report, DTIC Document, 1993.

[55] H.V. Hasselt, A. Guez, D. Silver, Deep reinforcement learning with double Q-learning, arXiv preprint, http://arxiv.org/abs/1509.06461, 2015.

[56] T. Schaul, J. Quan, I. Antonoglou, et al., Prioritized experience replay, arXiv preprint, http://arxiv.org/abs/1511.05952, 2015.

[57] Y. Ma, X. Wang, A two-layer LSTM based deep reinforcement learning for goal-driven dialogue system, BUPT CIST Tech Report, No. 20161201, 2016.

[58] A. Belz, Automatic generation of weather forecast texts using comprehensive probabilistic generation-space models, Nat. Lang. Eng. 14 (4) (2008) 431-455.

[59] Y.W. Wong, R. Mooney, Generation by inverting a semantic parser that uses statistical machine translation, in: Proceedings of the Human Language Technology and the Conference of the North American Chapter of the Association for Computational Linguistics, Rochester, NY, 2007, pp. 172-179.

[60] W. Lu, H.T. Ng, A probabilistic forest-to-string model for language generation from typed lambda calculus expressions, in: Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, Edinburgh, Scotland, UK, 2011, pp. 1611-1622.

[61] I. Konstas, M. Lapata, A global model for concept-to-text generation, J. Artif. Intell. Res. 48 (2013) 305-346.

[62] X. Zhang, M. Lapata, Chinese poetry generation with recurrent neural networks, in: Proceedings of the 2014 Conference on Empirical Methods on Natural Language Processing (EMNLP 2014), 2014.

[63] T.H. Wen, M. Gasic, D. Kim, et al., Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking, in: Proceedings of 2015 Special Interest Group on Dialogue (SIGDIAL), 2015.

[64] T.H. Wen, M. Gasic, N. Mrksic, Semantically conditioned LSTM-based natural language generation for spoken dialogue systems, in: Proceedings of the Conference on Empirical Methods on Natural Language Processing (EMNLP 2015), 2015.

[65] H. Mei, M. Bansal, M.R. Walter, What to talk about and how? Selective generation using LSTMs with coarse-to-fine alignment, arXiv preprint, http://arxiv.org/abs/1509.00838, 2015.

[66] O. Dusek, F. Jurcicek, Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings, in: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 2016, pp. 45-51.

[67] A. Ritter, C. Cherry, W.B. Dolan, Data-driven response generation in social media, in: Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2011, pp. 583-593.

[68] A. Sordoni, M. Galley, M. Auli, et al., A neural network approach to context-sensitive generation of conversational responses, in: Proceedings of 2015 North American Association of Computational Linguistics (NAACL), 2015, pp. 196-205.

[69] I.V. Serban, A. Sordoni, Y. Bengio, et al., Building end-to-end dialogue systems using generative hierarchical neural network models, arXiv preprint, http://arxiv.org/abs/1507.04808, 2016.

[70] A. Sordoni, Y. Bengio, H. Vahabi, et al., A hierarchical recurrent encoder-decoder for generative context-aware query suggestion, in: Proceedings of the ACM International Conference on Information and Knowledge Management (CIKM), 2015.

[71] L. Shang, Z. Lu, H. Li, Neural responding machine for short-text conversation, in: Proceedings of ACL-IJCNLP, 2015, pp. 1577-1586.

[72] O. Vinyals, Q.V. Le, A neural conversational model, in: Proceedings of 2015 International Conference on Machine Learning (ICML 2015), 2015.

[73] J. Li, M. Galley, C. Brockett, et al., A diversity-promoting objective function for neural conversation models, arXiv preprint, http://arxiv.org/abs/1510.03055, 2015.

[74] J. Dodge, A. Gane, X. Zhang, et al., Evaluating prerequisite qualities for learning end-to-end dialog systems, in: Proceedings of 2016 International Conference on Learning Representations (ICLR 2016), 2016.

Xiaojie Wang received his Ph.D. degree from Beihang University in 1996. He is a professor and director of the Center for Intelligence Science and Technology at Beijing University of Posts and Telecommunications. His research interests include natural language processing and multi-modal cognitive computing. He is an executive member of the council of the Chinese Association of Artificial Intelligence and director of its natural language processing committee. He is a member of the council of the Chinese Information Processing Society and a member of the Chinese Processing Committee of the China Computer Federation.

Caixia Yuan received her Ph.D. degree in 2009 from Beijing University of Posts and Telecommunications, China, and the University of Tokushima, Japan. She is currently working at Beijing University of Posts and Telecommunications. Her research interests include natural language understanding, human-machine dialogue, and machine learning.

    Available online 23 December 2016

    *Corresponding author.

E-mail addresses: xjwang@bupt.edu.cn (X. Wang), yuancx@bupt.edu.cn (C. Yuan).

    Peer review under responsibility of Chongqing University of Technology.

    http://dx.doi.org/10.1016/j.trit.2016.12.004

2468-2322/Copyright © 2016, Chongqing University of Technology. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

