• <tr id="yyy80"></tr>
  • <sup id="yyy80"></sup>
  • <tfoot id="yyy80"><noscript id="yyy80"></noscript></tfoot>
  • 99热精品在线国产_美女午夜性视频免费_国产精品国产高清国产av_av欧美777_自拍偷自拍亚洲精品老妇_亚洲熟女精品中文字幕_www日本黄色视频网_国产精品野战在线观看 ?

    Dependency-Based Local Attention Approach to Neural Machine Translation

Computers, Materials & Continua, 2019, Issue 5

Jing Qiu, Yan Liu, Yuhan Chai, Yaqi Si, Shen Su, Le Wang, and Yue Wu

Abstract: Recently, dependency information has been used in different ways to improve neural machine translation. For example, dependency labels can be added to the hidden states of source words, or the contiguous information of a source word can be extracted according to the dependency tree, learned independently, and then incorporated into the Neural Machine Translation (NMT) model as a unit in various ways. However, these works are all limited to using dependency information to enrich the hidden states of source words. Since many works in Statistical Machine Translation (SMT) and NMT have proven the validity and potential of dependency information, we believe there are still many other ways to apply it in the NMT architecture. In this paper, we explore a new way to use dependency information to improve NMT. Based on the theory of the local attention mechanism, we present the Dependency-based Local Attention Approach (DLAA), a new attention mechanism that allows the NMT model to attend to the dependency words related to the word currently being translated. Our work also indicates that dependency information can help to supervise the attention mechanism. Experimental results on the WMT 17 Chinese-to-English translation task shared training datasets show that our model is effective and performs distinctively well on long sentence translation.

Keywords: Neural machine translation, attention mechanism, dependency parsing.

    1 Introduction

Recently, Neural Machine Translation with the attention-based encoder-decoder framework [Bahdanau, Cho and Bengio (2014)] has achieved state-of-the-art performance on many translation tasks. Typically, the encoder maps the necessary information of a source sentence into corresponding hidden state vectors. According to the word currently being translated, these hidden state vectors are assigned different weights by the attention mechanism. Finally, the weighted hidden state vectors are combined into a fixed-length context vector that is given to the decoder to generate translations. Therefore, enriching source sentences with various kinds of linguistic knowledge, so that the encoder can learn more informative hidden state vectors, is an active direction of recent study. Among all linguistic knowledge, lexical knowledge, syntax, and semantics are the three aspects currently most widely applied in machine translation. Since syntactic dependency trees can well represent dependency relationships between long-distance words in a sentence, several works have successfully introduced dependency information into NMT. For example, adding a dependency label to each token of the source sentence [Bojar, Chatterjee, Federmann et al. (2016)] and organizing related dependency information into a single unit for later use [Chen, Wang, Utiyama et al. (2017)] have both been proven practicable. There is also work, such as Wu et al. [Wu, Zhou and Zhang (2017)], that learns dependency information independently to generate dependency hidden state vectors by adding another encoder.

However, boosting the encoder-decoder framework by adding a large amount of extra information on the encoder side may place an additional burden on the model itself; for example, the computational complexity may increase. As stated in Chen et al. [Chen, Wang, Utiyama et al. (2017)], their model is 25% slower than the compared standard NMT model. We assume another potential problem is that dependency information is not used adequately, since it is simply joined to the source representation on the encoder side.

In this paper, we propose a novel attention approach. While enriching source sentences so that the encoder can learn more informative representations is very important in the encoder-decoder framework, the attention mechanism on the decoder side is the part that most strongly influences the framework's ability to generate correct translations. Therefore, we present the Dependency-based Local Attention Approach (DLAA), a new type of attention mechanism to improve NMT. DLAA is based on the theory of the local attention mechanism: it leverages dependency information to let the attention mechanism focus on the source words that are semantically related to the word currently being translated (Section 5). In this way, not only can long-distance words relevant to the current translation be captured, but a more accurate translation model can also be trained by rationally exploiting the extra semantic and syntactic information.

Experimentally, we show that our approach is effective on the Chinese-to-English translation task. Results show that our approach works effectively and performs distinctively well on long sentence translation.

    2 Related work

The modeling formulation of the neural machine translation encoder-decoder framework is overly simplistic [Cohn, Hoang, Vymolova et al. (2016)], and in terms of alignment accuracy, the attention-based NMT model is not as good as conventional statistical alignment models [Liu, Utiyama, Finch et al. (2016)]. Under these considerations, many attempts have been made to improve it.

Merging linguistic knowledge [Wang, Wang, Guo et al. (2018)] has been proved valid for improving the performance of machine translation [Li, Resnik and Daumé III (2013)]. Integrating syntactic information has become a trend because of its advantage in capturing information that spans a long distance. From this perspective, Li et al. [Li, Xiong, Tu et al. (2017)] linearize a phrase tree into a structural label sequence and utilize another RNN to model these labels. The hidden vectors of the parse tree labels and the source words are then combined in three different ways to improve the translation accuracy of NMT. Wu et al. [Wu, Zhou and Zhang (2017)] add another two RNNs to take advantage of the dependency tree to explicitly model source words. Dependency structures are extracted from the dependency tree in two ways to enrich source words: the Child Enriched Structure RNN (CES-RNN) enriches source child nodes with global syntactic information, and the Head Enriched Structure RNN (HES-RNN) enriches source head nodes with their child nodes. Therefore, each source node can contain relatively comprehensive information.

Besides the straightforward way of modeling syntactic information with a sequential RNN, other classes of neural networks that are more suitable for modeling graph-structured data have also been exploited, as syntactic information is naturally expressed with edges and nodes. Bastings et al. [Bastings, Titov, Aziz et al. (2017)] employed a graph convolutional network (GCN) on top of a normal encoder network to incorporate information from dependency trees. A GCN is a neural network with multiple layers that directly models information on a graph, so information about the syntactic neighborhood of each source word can be modeled directly through this special kind of network. The work of Marcheggiani et al. [Marcheggiani and Titov (2017)] also verified that GCNs are effective for NLP tasks.

Both of the above methods model syntactic knowledge on the encoder side; however, the decoder side is also very important. As pointed out in Tu et al. [Tu, Liu, Lu et al. (2017)], source contexts affect translation adequacy while target contexts affect translation fluency. Hence, some works started to focus on improving the decoder side. The Sequence-to-Dependency NMT (SD-NMT) method [Wu, Zhang, Yang et al. (2017)] was proposed to meet this challenge. In this method, the dependency structure is dynamically constructed along with the process of generating target words, letting a single neural network perform target word generation and syntactic structure construction simultaneously; the resulting dependency tree largely influences the generation of the translation at the current moment.

Since the attention mechanism is an important part of NMT, Chen et al. [Chen, Huang, Chiang et al. (2017)] applied source syntax to the attention part to enhance alignment accuracy. Specifically, the coverage model was employed in their work by adding a coverage vector for each node [Tu, Lu, Liu et al. (2016)]; child node information was incorporated into the coverage vector, and the coverage vector was then used to update the attention.

Another kind of attempt at using syntactic knowledge arises from the consideration that every parse tree generated by a parser contains errors. Zaremoodi et al. [Zaremoodi and Haffari (2018)] proposed a forest-to-sequence attentional NMT model, based on the tree-to-sequence method [Eriguchi, Hashimoto and Tsuruoka (2016)], which employs another RNN to model hierarchical syntactic information. Different from tree-to-sequence models that use only one parse tree, they use packed forests, which contain different kinds of parse trees.

This work also draws on the ideas of Big Data Learning [Han, Tian, Huang et al. (2018)], data-driven models [Tian, Su, Shi et al. (2019); Qiu, Chai, Liu et al. (2018)], cloud systems [Li, Sun, Jiang et al. (2018)], and the Internet of Things [Chen, Tian, Cui et al. (2018)].

    3 Background

In this section, we introduce the following aspects. The knowledge of dependency parsing is briefly introduced in Section 3.1. Then we introduce the standard attention-based NMT model proposed by Vinyals et al. [Vinyals, Kaiser, Koo et al. (2015)] in Section 3.2. Next, the local attention mechanism, which improves on the standard (global) attention mechanism, is introduced in Section 3.3 [Luong, Pham and Manning (2015)]; both models consist of an encoder and a decoder. Finally, we introduce a recent work that also explored dependency information, which serves as one of the baselines compared with our model.

    3.1 Introduction of dependency parsing

Dependency parsing, or dependency grammar, focuses on the relationship between words within a sentence. A dependency is a binary asymmetric relation between a central word (head) and its subordinates [Bird, Klein and Loper (2009)]. The central word of a sentence is usually taken to be the tensed verb, and all other words either depend on the central word directly or are associated with it indirectly through a dependency path.

The result of dependency parsing is usually represented by a labeled directed graph. In the graph, words are represented as nodes, and the dependency relationship between a central word and its subordinate is represented as a tagged arc. For example, as shown in Fig. 1, "root" indicates that the central word is "chi" (eating). Although "wan" (playing) is another important verb in the sentence, the dependency parsing tool we used correctly tagged their relationship as "conj", which means that the verbs "chi" and "wan" are two parallel (coordinated) words. An example of the relationship between the central word and one of its subordinates is "chi" and "pingguo" (apple): the dependency parsing result shows that their relationship is the direct object "dobj".

A more important implication of dependency parsing is reflected in the two occurrences of the word "pingguo" in the sentence. The first "pingguo" (apple) means the fruit, while the second "pingguo" (Apple) refers to the name of a company. As we can see in Tab. 1, although the two words are identical in character form, their meanings differ greatly, and they should not both be translated as "apple".

Figure 1: An example of parsing the sentence "ta yibian chi pingguo, yibian wan pingguo shouji."; the corresponding English is "He is eating an apple while playing an iPhone."

Table 1: A translation example demonstrating the room for NMT to improve

By adding the dependency constraint, we can see that the second "pingguo" has a direct dependency relationship with "shouji" (cellphone). Therefore, in theory, the RNN can learn the difference between the two identical words "pingguo" and assign a different hidden state to each. On this basis, the model has a greater chance of translating the second "pingguo" together with its following word "shouji" into "iPhone", which is the correct translation.

Through this example, we can see that although a neural network can automatically learn the characteristics of the translation task, due to the limitations of the corpus and the current neural network architecture, enhancing the semantic information through dependency syntax can help us train a more accurate neural machine translation model.

In addition, although dependency parsers still make some mistakes, parsers driven by neural network models have improved accuracy considerably [Chen and Manning (2014)], so dependency information can be used proficiently in the translation task.
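To make the example above concrete, the following minimal sketch shows how a dependency parse like the one in Fig. 1 can be obtained in Python. The paper itself uses Stanford CoreNLP (see Section 6.1); here Stanza, Stanford's neural pipeline, is assumed as a convenient stand-in, and the exact relation labels (e.g., "obj" vs. "dobj") may differ by parser and version.

```python
# A minimal sketch of parsing the example sentence from Fig. 1.
# Assumption: Stanza is used instead of the Java CoreNLP toolkit named in the paper.
import stanza

# Build a Chinese pipeline with a dependency parser
# (run stanza.download("zh") once beforehand).
nlp = stanza.Pipeline(lang="zh", processors="tokenize,pos,lemma,depparse")

doc = nlp("他一边吃苹果，一边玩苹果手机。")

for sent in doc.sentences:
    for word in sent.words:
        head = sent.words[word.head - 1].text if word.head > 0 else "ROOT"
        # The first "苹果" should attach to "吃" as its object (obj/dobj),
        # while the second "苹果" modifies "手机" (a compound relation).
        print(f"{word.text}\t<-{word.deprel}-\t{head}")
```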

    3.2 Neural machine translation with standard attention mechanism

Usually, a source input sentence is first tokenized as $x_j \in (x_1, \dots, x_J)$ and then each token is embedded as a vector $V_{x_j}$, as shown in Fig. 2. After that, the encoder encodes those source vectors into a sequence of hidden state vectors:

$$h^e_j = f(V_{x_j}, h^e_{j-1})$$

where $h^e_j$ is an encoder hidden state vector generated by a Recurrent Neural Network (RNN) $f$; in our work we use a Long Short-Term Memory (LSTM) network as $f$.

The decoder is typically trained to compute the probability of the next target word $y_t$ through a softmax layer $q$:

$$h^d_t = f(h^d_{t-1}, V_{y_{t-1}}, c_t)$$
$$p(y_t \mid y_{<t}, x) = q(h^d_t, c_t)$$

In the last two equations, $c_t$ is the context vector for the current translation step, which is computed as a weighted sum of all the encoder hidden states:

$$c_t = \sum_{j=1}^{J} \alpha_{tj}\, h^e_j$$

where the alignment weight $\alpha_{tj}$ of each encoder hidden state $h^e_j$ is computed as:

$$\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{J} \exp(e_{tk})}$$

where $e_{tj}$ is an alignment model that scores how well the inputs around position $j$ and the output at the current time $t$ match:

$$e_{tj} = s(h^d_{t-1}, h^e_j)$$

where $s$ is a score function for which several alternatives exist.
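As a compact illustration of the equations above, the following NumPy sketch computes the alignment weights and the context vector for one decoding step, assuming a simple dot-product score function; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def global_attention(dec_state, enc_states):
    """One step of standard (global) attention.

    dec_state:  decoder hidden state from the previous step, shape (d,)
    enc_states: all encoder hidden states h^e_j, shape (J, d)
    Returns the context vector c_t and the weights alpha_t.
    """
    # Alignment scores e_tj, here with a dot-product score function.
    scores = enc_states @ dec_state                 # shape (J,)
    # Softmax over all source positions -> alpha_tj.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector as the weighted sum of encoder states.
    context = weights @ enc_states                  # shape (d,)
    return context, weights

# Toy usage with random states.
rng = np.random.default_rng(0)
c_t, alpha_t = global_attention(rng.normal(size=8), rng.normal(size=(5, 8)))
print(alpha_t.round(3), c_t.shape)
```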

    3.3 Local attention mechanism

The traditional attention approach, also known as the global attention mechanism, takes all encoder hidden state vectors into account when deriving the context vector at the current time step. Considering that global attention is computationally expensive and impractical for translating long sentences, the local attention mechanism was proposed by Luong et al. [Luong, Pham and Manning (2015)]. It selectively focuses on only a small subset of the encoder hidden states per target word, which avoids the expensive computation and trains more easily than the traditional global attention approach.

The main idea of the local attention mechanism is to select a position within the length of the source sentence before generating each context vector. A fixed-size window is centered on this position, so that some source hidden state vectors fall within the window. Only the hidden state vectors contained in the window are selected to participate in generating the current context vector.

Their work proposed two methods for selecting the position, and therefore develops two types of local attention model.

One is the monotonic alignment model, which simply sets the position equal to the current time step, assuming that the source and target sequences are roughly monotonically aligned. The other is the predictive alignment model, which learns to predict an alignment position by means of an independent network.

Specifically, in both methods the context vector $c_t$ is now a weighted sum of only those encoder hidden states included within a window $[p_t - D, p_t + D]$:

$$c_t = \sum_{j = p_t - D}^{p_t + D} \alpha_{tj}\, h^e_j$$

$D$ is the half size of the window and is selected empirically. $p_t$ equals the current time $t$ when using the monotonic alignment model. When using the predictive alignment model, $p_t$ is an aligned position generated by the model according to the following equation:

$$p_t = S \cdot \mathrm{sigmoid}\!\left(v_p^{\top} \tanh(W_p h^d_t)\right)$$

where $W_p$ and $v_p$ are parameters to be learned and $S$ is the length of the source sentence. As a result of the sigmoid, $p_t \in [0, S]$.
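The following sketch extends the previous one to predictive local attention: it first predicts $p_t$ from the decoder state and then restricts the attention softmax to the window $[p_t - D, p_t + D]$. The parameter shapes, the dot-product score, and the omission of Luong et al.'s optional Gaussian weighting term are simplifying assumptions.

```python
import numpy as np

def local_attention(dec_state, enc_states, W_p, v_p, D=5):
    """Predictive local attention, simplified sketch."""
    J, d = enc_states.shape
    # Predict the center position p_t in [0, S] from the decoder state.
    p_t = J * (1.0 / (1.0 + np.exp(-v_p @ np.tanh(W_p @ dec_state))))
    lo, hi = max(0, int(p_t) - D), min(J, int(p_t) + D + 1)
    window = enc_states[lo:hi]                 # states inside [p_t - D, p_t + D]
    # Attention restricted to the window (dot-product score + softmax).
    scores = window @ dec_state
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context = weights @ window
    return context, p_t

# Toy usage.
rng = np.random.default_rng(0)
d = 8
ctx, p = local_attention(rng.normal(size=d), rng.normal(size=(20, d)),
                         W_p=rng.normal(size=(d, d)), v_p=rng.normal(size=d))
print(round(float(p), 2), ctx.shape)
```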

    3.4 Neural machine translation with source dependency representation

In this section, we introduce in more detail a work [Chen, Wang, Utiyama et al. (2017)] that also exploited dependency information to improve the NMT model. Part of our work is inspired by this article, and following its ideas we did our best to re-implement their models as baselines for comparison. The work proposed two models, SDRNMT-1 (Neural machine translation with source dependency representation) and SDRNMT-2, to explore efficient ways of using dependency information.

Different from previous work that simply attached dependency labels to source sentences, this work uses a relatively complicated approach: an independent neural network learns the dependency information, and the learned representation is then combined with the NMT model in different ways.

The first step is the extraction and organization of the dependency information. In their work, a dependency unit is extracted for each source word $x_j$ from the dependency tree. The dependency unit is organized as follows:

$$U_j = \left\langle x_j,\ PA_{x_j},\ SI_{x_j},\ CH_{x_j} \right\rangle$$

where $U_j$ represents the dependency unit of $x_j$, and $PA_{x_j}$, $SI_{x_j}$, $CH_{x_j}$ denote the parent, sibling, and child words of $x_j$, respectively, in the sentence tree.

Then a simple Convolutional Neural Network (CNN) is designed to learn a Source Dependency Representation (SDR) for each organized dependency unit.

Therefore, compared with the standard attention-based NMT model, the encoders of the two models, SDRNMT-1 and SDRNMT-2, both consist of a convolutional architecture and an RNN. In this way, the sparsity issue caused by the large size of dependency units is alleviated and a compositional representation of the dependency information is learned.
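As a rough illustration of how a compositional representation can be built from a dependency unit, the sketch below max-pools a 1D convolution over the embeddings of the words in $U_j$; the filter width, pooling choice, and dimensions are assumptions for illustration, not the exact architecture of Chen et al.

```python
import numpy as np

def sdr_from_unit(unit_embeddings, conv_filters):
    """Toy Source Dependency Representation for one dependency unit U_j.

    unit_embeddings: embeddings of the words in U_j, shape (n_words, emb_dim)
    conv_filters:    1D convolution filters, shape (n_filters, width, emb_dim)
    Returns a vector of size n_filters (max-over-time pooled feature map).
    """
    n_words, emb_dim = unit_embeddings.shape
    n_filters, width, _ = conv_filters.shape
    n_pos = max(1, n_words - width + 1)
    feats = np.empty((n_filters, n_pos))
    for i in range(n_pos):
        window = unit_embeddings[i:i + width]
        if window.shape[0] < width:            # pad short units with zeros
            window = np.vstack([window,
                                np.zeros((width - window.shape[0], emb_dim))])
        feats[:, i] = np.tanh((conv_filters * window).sum(axis=(1, 2)))
    return feats.max(axis=1)                   # max pooling -> SDR vector

rng = np.random.default_rng(1)
sdr = sdr_from_unit(rng.normal(size=(4, 16)), rng.normal(size=(32, 3, 16)))
print(sdr.shape)  # (32,)
```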

The innovation of SDRNMT-1 is to leverage the dependency information (SDR) together with the source word embedding to generate the source hidden state vectors:

$$h^e_j = f\!\left([V_{x_j} : V_{U_j}],\ h^e_{j-1}\right)$$

where $V_{U_j}$ is the SDR vector and ":" denotes vector concatenation.

    The remaining architecture of SDRNMT-1 is the same as the standard attention-based NMT model.

Unlike SDRNMT-1, which only uses dependency information on the encoder side, SDRNMT-2 makes dependency information participate in various parts of the encoder-decoder framework.

Instead of concatenating the source word embedding and the SDR, SDRNMT-2 lets the SDR be an independent part that generates its own hidden state vectors:

where $q$ is an independent RNN that learns the SDR hidden state vectors. SDRNMT-2 also generates a separate context vector for the SDR hidden states:

while, at the same time, the context vector of the source hidden states is:

The current target hidden state vectors for the dependency information and the source words are then computed as:

Finally, the work defines the probability of the next target word as:

Figure 2: NMT with the dependency-based local attention approach

    4 Organizing dependency information

Inspired by Chen et al. [Chen, Wang, Utiyama et al. (2017)], introduced above, which exploits dependency information from the dependency tree as a unit that adds extra information to each source word, we organize our dependency information unit $L_j$ for each source word as follows:

$$L_j = \left\langle L_{x_j},\ L_{PA_{x_j}},\ L_{CH_{x_j}},\ L_{SI_{x_j}} \right\rangle$$

Different from directly organizing the dependency words themselves into a unit, we record the locations of the words in the sentence and organize them into a unit. $x_j$ is one of the source tokens; $PA_{x_j}$, $SI_{x_j}$, $CH_{x_j}$ denote the parent, sibling, and child words of $x_j$, respectively, in the sentence tree. $L_{x_j}$ represents the location of $x_j$ itself, while $L_{PA_{x_j}}$, $L_{CH_{x_j}}$, $L_{SI_{x_j}}$ denote the locations of the parent, children, and siblings of $x_j$, respectively. Take $x_5$ in Fig. 2 as an example: the solid box represents $L_5$, with $L_{x_5} = \langle 5 \rangle$, $L_{PA_{x_5}} = \langle 7 \rangle$, $L_{CH_{x_5}} = \langle 1, 3 \rangle$, $L_{SI_{x_5}} = \langle 6, 8 \rangle$, that is, $L_5 = \langle 5, 7, 1, 3, 6, 8 \rangle$. Empirically, we constrain the number of location entries in a unit to ten, which accommodates nine dependency words for a source token; most tokens have no more than nine dependency words. We tried padding units $L_j$ shorter than ten with "/", but experimental results show that this kind of padding is ineffective and computationally wasteful. Therefore, we pad the spare slots of $L_j$ with the locations of the words around $x_j$.
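The following sketch shows one way to build such location units from a head-index list (1-based heads, 0 for the root), padding each unit to a fixed length with neighboring positions as described above; the helper name, the exact padding order, and the handling of very short sentences are assumptions.

```python
def build_location_units(heads, max_len=10):
    """Build a location unit L_j (1-based positions) for every token.

    heads: heads[j] is the 1-based position of the parent of token j+1 in the
           dependency tree (0 means the token is the root).
    """
    n = len(heads)
    units = []
    for j in range(1, n + 1):
        parent = [heads[j - 1]] if heads[j - 1] != 0 else []
        children = [k for k in range(1, n + 1) if heads[k - 1] == j]
        siblings = [k for k in range(1, n + 1)
                    if k != j and heads[j - 1] != 0 and heads[k - 1] == heads[j - 1]]
        unit = ([j] + parent + children + siblings)[:max_len]
        # Pad spare slots with the positions of words around x_j rather than
        # a dummy "/" symbol, which the paper reports to be wasteful.
        offset = 1
        while len(unit) < max_len:
            for cand in (j - offset, j + offset):
                if 1 <= cand <= n and cand not in unit and len(unit) < max_len:
                    unit.append(cand)
            offset += 1
            if offset > n and len(unit) < max_len:   # sentence exhausted
                unit.append(j)
        units.append(unit)
    return units

# Toy tree: token 3 is the root; tokens 1, 2 and 4 attach to it.
print(build_location_units([3, 3, 0, 3], max_len=6))
```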

    5 Neural machine translation with dependency-based local attention approach

In order to address the potential issues described in Section 1, we propose DLAA (dependency-based local attention approach), a new type of attention approach to enhance NMT.

In our model, the encoder and the decoder are the same as in the traditional standard attentional NMT model, implemented with RNNs. However, the inputs of the encoder include not only the tokens of the source sentence but also the corresponding location information unit of each token, as shown in Fig. 2. After the source tokens are embedded and represented as encoder hidden states, a dependency block is generated for each token using the location information contained in $L_j$. In detail, the dependency block is defined as follows:

$$\mathrm{block}_j = \left\langle h^e_j,\ h^e_{L_{PA_{x_j}}},\ h^e_{L_{CH_{x_j}}},\ h^e_{L_{SI_{x_j}}} \right\rangle$$

In this equation, $h^e_j$ is the encoder hidden state of $x_j$ itself, and $h^e_{L_{PA_{x_j}}}$ is the encoder hidden state of the parent of $x_j$; similarly, $h^e_{L_{CH_{x_j}}}$ and $h^e_{L_{SI_{x_j}}}$ are the encoder hidden states of the children and siblings of $x_j$.

After generating the dependency blocks, one of them is selected according to the aligned position $p_t$ generated by the predictive alignment model described in Section 3.3.

The local attention mechanism chooses to focus on only a small subset of encoder hidden states during the attention computation, while DLAA focuses on the encoder hidden states contained in the selected dependency block:

$$c_t = \sum_{h^e_k \in \mathrm{block}_{p_t}} \alpha_{tk}\, h^e_k$$

Compared with the local attention mechanism, which focuses only on a fixed window of encoder hidden states around the chosen position, DLAA focuses on the encoder hidden states that have semantic relationships with the chosen position. In this way, information that is related to the current time step but lies at a long distance can also be captured.
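Putting Sections 4 and 5 together, the sketch below replaces the fixed window of local attention with the dependency block of the predicted position $p_t$: the attention softmax is computed only over the encoder states whose positions appear in $L_{p_t}$. This is a simplified reading of DLAA with illustrative names, not the authors' exact implementation; the hard-coded location units correspond to the toy tree from the previous sketch.

```python
import numpy as np

def dlaa_attention(dec_state, enc_states, location_units, W_p, v_p):
    """Dependency-based local attention, simplified sketch.

    enc_states:     encoder hidden states, shape (J, d)
    location_units: one list of 1-based positions (L_j) per source token
    """
    J, _ = enc_states.shape
    # Predict the aligned position p_t as in predictive local attention.
    p_t = J * (1.0 / (1.0 + np.exp(-v_p @ np.tanh(W_p @ dec_state))))
    center = min(max(int(round(p_t)), 1), J)              # clamp to [1, J]
    # Attend only over the dependency block of the word at position p_t.
    block = [i - 1 for i in location_units[center - 1]]   # 0-based indices
    states = enc_states[block]
    scores = states @ dec_state
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ states, center

# Toy usage: location units for the 4-token tree used earlier.
units = [[1, 3, 2, 4], [2, 3, 1, 4], [3, 1, 2, 4], [4, 3, 1, 2]]
rng = np.random.default_rng(2)
d, J = 8, 4
ctx, pos = dlaa_attention(rng.normal(size=d), rng.normal(size=(J, d)),
                          units, rng.normal(size=(d, d)), rng.normal(size=d))
print(pos, ctx.shape)
```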

    6 Experiment

    6.1 Setting up

We carried out our experiments on Chinese-to-English translation and conducted three sets of experiments. The datasets are all extracted from the WMT17 translation task shared corpora. Experiment one uses 0.23 M training sentence pairs extracted from news-commentary [Bojar, Chatterjee, Federmann et al. (2017)]; its validation and test sets are extracted from the same corpus. The training datasets of experiments two and three are both extracted from the United Nations Parallel Corpus [Bojar, Chatterjee, Federmann et al. (2017)] and contain 0.9 M and 2 M sentence pairs, respectively; their validation and test sets are likewise extracted from that corpus. We group our test sets by sentence length; for example, "30" indicates that the length of the sentences is between 20 and 30. Each group of the test set contains one thousand sentences, except the groups "70" and "80", since sentences of such length are rare in the corpus. The dependency tree for each Chinese sentence is generated by Stanford CoreNLP [Manning, Surdeanu, Bauer et al. (2014)]; the processing speed with 6 GB of memory is about 0.3 M sentences per hour. Translation quality is evaluated with the case-insensitive BLEU-4 [Papineni, Roukos, Ward et al. (2002)] metric.
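As a rough illustration of this evaluation setup, the snippet below groups hypothesis/reference pairs into length buckets and scores each bucket with case-insensitive corpus BLEU-4 using NLTK; the bucket boundaries, smoothing choice, and tokenization are assumptions and not the authors' evaluation scripts.

```python
from collections import defaultdict
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_by_length(sources, references, hypotheses,
                   buckets=(10, 20, 30, 40, 50, 60, 70, 80)):
    """Case-insensitive corpus BLEU-4 per source-length bucket."""
    grouped = defaultdict(lambda: ([], []))
    for src, ref, hyp in zip(sources, references, hypotheses):
        # Assign to the first bucket whose upper bound covers the source length.
        bucket = next((b for b in buckets if len(src.split()) <= b), buckets[-1])
        refs, hyps = grouped[bucket]
        refs.append([ref.lower().split()])      # one reference per sentence
        hyps.append(hyp.lower().split())
    smooth = SmoothingFunction().method1
    return {b: corpus_bleu(refs, hyps, smoothing_function=smooth)
            for b, (refs, hyps) in sorted(grouped.items())}

# Toy usage with a single sentence pair.
print(bleu_by_length(["ta chi pingguo"],
                     ["he eats an apple"],
                     ["he eats an apple"]))
```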

We use the sequence-to-sequence model implemented in the TensorFlow NMT tutorial [Luong, Brevdo and Zhao (2017)], with its default settings, as one of our baseline systems.

The other models used for comparison are the local predictive alignment model, SDRNMT-1, and SDRNMT-2. To be consistent with the number of dependency words we set, SDRNMT-1 and SDRNMT-2 both retain 10 dependency words. The window size of the local predictive alignment model is also 10.

We did our best to re-implement SDRNMT-1 and SDRNMT-2. Since we re-implemented them on TensorFlow, the implementation of the convolutional neural network differs slightly from the original.

    6.2 Training

The Chinese and English vocabularies are both limited to 40 K for our model and the baseline models; other words are replaced by the special symbol "UNK". The maximum training length of Chinese sentences is 40 due to equipment limitations.

For the comparison between the traditional standard attention-based NMT model and the dependency-based local attention NMT model, each RNN layer contains 1024 hidden units, the word embeddings are 1024-dimensional, and stochastic gradient descent (SGD) with a batch size of 64 is used to train the networks.

Due to the limitations of our experimental conditions, for the comparison of the DLAA NMT model with the local predictive alignment model, SDRNMT-1, and SDRNMT-2, each RNN layer contains 620 hidden units, the word embeddings are 620-dimensional, and SGD with a batch size of 32 is used to train the networks.

    6.3 Results and analyses

Tab. 2, Tab. 3 and Tab. 4 list the results on the three datasets. From the average scores, we observe that our approach indeed improves the translation quality of the traditional attentional NMT system, which indicates that our way of using dependency information is effective. However, as shown in the tables, our approach does not perform ideally on short sentences; we assume that the reduced set of hidden states available to the attention mechanism hurts the performance of NMT when translating short sentences. For long sentences, however, the reduced information guided by dependency information remains effective for improving the performance of NMT. On the other hand, the results also confirm that sufficient source context is important for the NMT system.

Table 2: Experiment on the 0.23 M training dataset

Table 3: Experiment on the 0.9 M training dataset

Table 4: Experiment on the 2 M training dataset

Tab. 5 shows the comparison of the DLAA NMT model, the predictive alignment NMT model, SDRNMT-1, and SDRNMT-2 on the 2 M training dataset. Although we did our best to re-implement SDRNMT-1 and SDRNMT-2, they are less effective than our model and the local predictive alignment model, perhaps because we did not use training techniques such as dropout to bring those models to their best states. Although our model does not outperform the local predictive alignment model overall, it shows its competitiveness in translating long sentences.

Table 5: Comparison experiments on the 2 M training dataset for DLAA, the local predictive alignment model (local_P), SDRNMT-1, and SDRNMT-2

Figure 3: Partial perplexity curves of NMT with DLAA and conventional NMT; the difference shows that NMT with DLAA learns more effective information to train the NMT model

    6.4 Analyses of perplexity

Perplexity is a commonly used evaluation metric for language models. In simple terms, a language model is a model used to compute the probability of a sentence, that is, to judge how well a sentence conforms to human language habits. For example, a given sentence is represented as:

$$S = s_1, s_2, \dots, s_N$$

where $s_1, s_2, \dots, s_N$ are the words that make up the sentence. The probability of the sentence can then be expressed as:

$$P(S) = P(s_1)\,P(s_2 \mid s_1)\cdots P(s_N \mid s_1, s_2, \dots, s_{N-1})$$

That is, given the previous M words, the conditional probability of the (M+1)-th word is modeled; in other words, we hope the language model can predict the (M+1)-th word.

The basic idea of perplexity is that the higher the probability a language model assigns to the (well-formed) sentences of the test dataset, the better the model is. The formula for perplexity is as follows:

$$\mathrm{PPL}(S) = P(s_1, s_2, \dots, s_N)^{-\frac{1}{N}}$$

From the formula, the smaller the perplexity, the better the language model is at generating sentences with high probability.
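For concreteness, here is a minimal sketch of the per-sentence perplexity computation above, assuming the model provides the conditional probability of each word given its history; the function name is illustrative.

```python
import math

def perplexity(word_probs):
    """Perplexity of one sentence from its per-word conditional probabilities
    P(s_i | s_1 ... s_{i-1}); lower is better."""
    n = len(word_probs)
    log_prob = sum(math.log(p) for p in word_probs)   # log P(s_1 ... s_N)
    return math.exp(-log_prob / n)                    # P(...)^(-1/N)

# A model that is fairly confident about every word gives low perplexity.
print(perplexity([0.5, 0.8, 0.9, 0.7]))   # ~1.41
```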

Fig. 3 shows part of the perplexity curve during training. The NMT model equipped with DLAA has a much smaller perplexity value than the normal NMT model at the very beginning, which indicates that our model has great potential for modeling the language in a fast and efficient way. Although at subsequent steps our perplexity values converge a little more slowly than those of the normal NMT model, they finally reach the same converged value.

    7 Conclusion and future work

In this paper, we proposed a new attention approach, DLAA, to improve the translation performance of the NMT system based on the theory of the local attention mechanism. Dependency information, a form of syntactic knowledge, was used to mine deep relationships between words in a sentence to ensure translation quality. Experiments on Chinese-to-English translation tasks show that our approach is effective and improves the translation performance of the conventional NMT system, while the problems revealed in our experiments need further exploration. We will also compare our work with newer NMT models.

As syntactic knowledge has been proved useful in traditional statistical machine translation, we believe it can also help to improve NMT, as a number of works have shown. In the future, we plan to explore more efficient ways of using syntactic knowledge and to fix the problems identified in the current work.
