
    Improving sentence simplification model with ordered neurons network

2022-05-28 15:17:36

    Chunhui Deng|Lemin Zhang|Huifang Deng

1 School of Computer Engineering, Guangzhou College of South China University of Technology, Guangzhou, China
2 School of Computer Science and Engineering, South China University of Technology, Guangzhou, China

Abstract: Sentence simplification is an essential task in natural language processing and aims to simplify complex sentences while retaining their primary meanings. To date, the main research on sentence simplification models has been based on sequence-to-sequence (Seq2Seq) models. However, these Seq2Seq models are incapable of analysing the hierarchical structure of sentences, which is of great significance for sentence simplification. The problem can be addressed with an ON-MULTI-STAGE model constructed on the basis of the improved MULTI-STAGE encoder model. In this model, an ordered neurons network is introduced to provide sentence-level structural information for the encoder and decoder. A weak attention connection method is then employed to let the decoder use this sentence-level structural information. Experimental results on two open data sets demonstrate that the constructed model outperforms the state-of-the-art baseline models in sentence simplification.

    1|INTRODUCTION

Sentence simplification is an essential text-generation task in the field of natural language processing (NLP). It aims to reduce the linguistic complexity of the source sentence while retaining its main idea, and it has many practical applications. For example, it can help people with low-literacy skills acquire information effectively [1, 2] and can improve the performance of other NLP tasks, such as text summarization [3].

Traditional sentence simplification models focus primarily on paraphrasing words [4-6], deleting unimportant phrases/words [7-9], or dividing a long sentence into shorter sentences [2, 10]. However, such models cannot be trained end-to-end and depend too much on hand-crafted rules. Inspired by the great success of sequence-to-sequence (Seq2Seq) models in machine translation tasks [11, 12], recent works [13-15] have constructed similar Seq2Seq sentence simplification models with attention mechanisms. Zhang et al. [13] further improved the Seq2Seq model with reinforcement-based policy gradient approaches. Vu et al. [14] improved the encoder architecture of the Seq2Seq model with augmented memory capacities called neural semantic encoders (NSEs). Zhang and Deng [15] modified the encoder with a multi-stage encoder to further improve the Seq2Seq model. However, these models still have several deficiencies: (1) their network structures [13-15] cannot make use of neuron-ordered information; (2) their encoders and decoders [13-15] fail to consider the hierarchical structural information of the sentence, which is of great significance in sentence simplification.

To address the above problems, and following the work of [15], we propose an ON-MULTI-STAGE model based on the improved MULTI-STAGE encoder model [15] by introducing an ordered neurons (ON) network [16]. The proposed model goes beyond conventional long short-term memory (LSTM)/gated recurrent units (GRUs) by introducing the ON network structure to optimize the encoder of the MULTI-STAGE model. The ON network can express the hierarchical structural information of a sentence through the neuron-ordered information. This network architecture has been shown to be effective in language modelling tasks [16].

The encoder of the proposed model works in three stages: the N-gram reading, glance-over, and final encoding stages. The N-gram reading stage extracts convolutional word vectors that carry N-gram grammatical context. The glance-over stage uses the ON network to capture the hierarchical structural information of the sentence. The final encoding stage takes advantage of the convolutional word vectors and the hierarchical structural information of the sentence to encode the source sentence more effectively. In addition, the proposed model employs a weak attention connection mechanism to let its decoder use the hierarchical structural information of the sentence. To the best of our knowledge, the ON-MULTI-STAGE model is the first Seq2Seq model to use ON networks and the first model to use the ON network for sentence simplification.

The proposed model is first evaluated on two open data sets, Newsela [17] and WikiLarge [13]. The experimental results show that the proposed ON-MULTI-STAGE model is significantly better than the benchmark models. Exploration experiments are then conducted within the model. Finally, the outputs of the proposed model are compared with those of the MULTI-STAGE model [15]. This comparison also shows that the proposed model is more effective for sentence simplification.

The rest of this paper is organized as follows: Section 2 provides a review of related works. Section 3 presents the proposed ON-MULTI-STAGE model. Section 4 shows experimental results and comparisons with benchmark models. The last section concludes this paper.

    2|RELATED WORKS

In the past, traditional sentence simplification models focused mainly on simplifying syntactically complex structures, substituting complex words with simpler ones, or deleting unimportant parts of source sentences. These models usually focused solely on individual aspects of the sentence simplification problem. Some models conduct only syntactic simplification [4, 18], while others deal with lexical simplification solely by replacing rare words with simpler WordNet synonyms [5, 6].

Recently, some works have regarded the sentence simplification task as a monolingual text-to-text generation task and have used models based on statistical machine translation to solve it. For example, Woodsend et al. [19] used the quasi-synchronous grammar framework to formulate sentence simplification and turned to linear programming to score candidate simplifications. Wubben et al. [20] proposed a two-stage model. In the first stage, they trained a phrase-based machine translation model with complex-simple sentence pairs. In the second stage, they reranked the top-K outputs of the machine translation model according to the dissimilarity of the outputs to the source sentence. Following the work of [20], Narayan and Gardent [21] proposed a hybrid model. In this model, a Boxer [22] based probabilistic module first performs sentence segmentation and deletion operations over discourse representation structures. Next, the output sentences are further simplified by a model similar to [20].

With the development of deep learning and neural networks, neural Seq2Seq models have been successfully applied to many sequence-generation tasks, such as machine translation [23, 24] and summarization [25]. Inspired by the success of Seq2Seq models in these NLP applications, Zhang et al. [13] proposed a deep reinforcement learning sentence simplification model (DRESS) that integrates the attention-based Seq2Seq model with reinforcement learning to reward simpler outputs. They then further improved the model with lexical simplification and proposed the DRESS-LS model. Vu et al. [14] proposed using a memory-augmented recurrent neural network architecture, called NSEs [26], to improve the encoder of conventional LSTM- or GRU-based Seq2Seq models. Zhang and Deng [15] went beyond conventional single-stage encoder-based Seq2Seq models and proposed a multi-stage encoder Seq2Seq model (MULTI-STAGE). In addition, based on a pointer-generator [27] Seq2Seq model, Guo et al. [28] further improved its text entailment and paraphrasing capabilities with multi-task learning, but this multi-task learning method relies heavily on data sets beyond the sentence simplification task. Nishihara et al. [29] proposed a controllable sentence simplification model that incorporates a lexical constraint loss into the Seq2Seq model, but their model depends heavily on the constituents of the data set because it needs the sentence-level label of the target sentence. These Seq2Seq models have achieved reasonably good results in the sentence simplification task.

However, the network structures of these conventional Seq2Seq models cannot take advantage of neuron-ordered information, and their encoders and decoders do not consider the hierarchical structural information of the sentence, which is of great significance for sentence simplification. To solve these problems, we propose an ON-MULTI-STAGE model.

    3|METHODOLOGY

An ON-MULTI-STAGE model is proposed. The proposed model goes beyond conventional neural network structures and uses an ON network [16] structure to optimize the encoder of the improved MULTI-STAGE model [15]. To the best of our knowledge, the ON-MULTI-STAGE model is the first Seq2Seq model that uses an ON network and the first model that uses the ON network for sentence simplification. In this section, we first introduce the concept of an ON network and then present the ON-MULTI-STAGE model.

    3.1|Ordered neurons

In conventional neural networks such as LSTMs, the neurons are usually unordered. Therefore, these networks cannot express the hierarchical structural information of sentences through neuron-ordered information. Unlike these neural networks, the ordered neurons long short-term memory network (ONLSTM) [16] can extract the hierarchical structural information of sentences by using the ordered information of neurons. Given an example sentence such as 'Jay comes from China', the hierarchical structure tree of this sentence is shown in Figure 1.

In the ON network, the neuron-ordered information (Figure 2) is used to provide the hierarchical structure of the sentence (Figure 1). As shown in Figure 2, when processing the sentence, the information contained in different dimensions of the ONs corresponds to the hierarchical structure tree of the sentence. For example, the low-dimensional neurons (e.g., the word-level neurons in Figure 2) mainly express word-level information (e.g., the leaf nodes of the tree in Figure 1), and the higher-dimensional neurons (e.g., the phrase-level or sentence-level neurons in Figure 2) mainly represent phrase-level or sentence-level information (e.g., the phrase-level nodes or the root of the tree in Figure 1). It is precisely because its neurons are ordered, unlike those of conventional neural networks, that the ON network can provide the hierarchical structural details of the sentence.
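As an illustration only (a minimal PyTorch sketch of our own, not part of the original ONLSTM implementation), the following shows how a cumulative softmax ("cumax") activation produces monotonic gate values along the hidden dimension, so that low-indexed neurons can act as word-level memory and high-indexed neurons as sentence-level memory:

```python
# Minimal sketch of the "cumax" activation behind ordered neurons: a softmax
# followed by a cumulative sum rises monotonically from ~0 to 1 along the
# hidden dimension, letting low and high dimensions be gated differently.
import torch
import torch.nn.functional as F

def cumax(logits: torch.Tensor) -> torch.Tensor:
    """Cumulative softmax: monotonically non-decreasing values in [0, 1]."""
    return torch.cumsum(F.softmax(logits, dim=-1), dim=-1)

forget_logits = torch.randn(1, 8)          # one time step, 8 ordered neurons
input_logits = torch.randn(1, 8)
master_forget = cumax(forget_logits)       # rises towards 1: high dims are kept
master_input = 1.0 - cumax(input_logits)   # falls towards 0: low dims are written
print(master_forget, master_input)
```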

3.2|ON-MULTI-STAGE model

Sentence simplification can be viewed as a sequence-to-sequence text generation task in which the target sentence is much simpler than the source sentence. The MULTI-STAGE model [15] modified the encoder of the conventional Seq2Seq model by using a multi-stage encoder and has shown its effectiveness in sentence simplification. However, this model does not consider the hierarchical structural information of the sentence, which is of great significance for sentence simplification. Because of this, we further improve the MULTI-STAGE model by integrating the ON network into its encoder. We name this model the ON-MULTI-STAGE model.

Figure 3 shows the overall structure of the proposed model. Given a complex source sentence X = (x0, x1, …, xi, …, x|L|), where xi denotes the i-th word-embedding vector in the source sentence of length |L|, the proposed model learns to generate its simplified target Y = (y1, y2, …, yj, …, y|l|), where yj denotes the j-th word in the target sentence of length |l|. As shown in Figure 3, the encoder works in three stages. The first stage is the N-gram reading stage, where N represents the size of the convolution kernel. In this stage, a convolution operation on the word-embedding vectors of the source sentence is used to obtain the convolutional word-embedding vectors. The second stage is the glance-over stage, which is built from the ONLSTM network. In this stage, the ONLSTM network (the green-coloured part in Figure 3) extracts the hierarchical structural information of the sentence for the final stage of the encoder. The third stage is the final encoding stage. In this stage, the proposed model extracts the final encoding of the source sentence based on the information obtained in the first two stages.

    FIGURE 1 Hierarchical structure tree of the sentence

    FIGURE 2 Neuron structure of the ordered neurons network

    FIGURE 3 ON-MULTI-STAGE model

    3.2.1|Encoder of ON-MULTI-STAGE model

The first stage of the encoder is the N-gram reading stage. In this stage, the proposed model obtains the convolutional word-embedding vector of the source sentence by a convolution operation. Compared with the conventional word-embedding vector, the convolutional word-embedding vector has N-gram contextual relevance (the blue-coloured part in Figure 3). The convolutional word-embedding vector matrix EX can be calculated as follows:

where Wng and bng are trainable parameters of the convolution kernel, N is the convolutional kernel size, exi is the i-th convolutional word-embedding vector of EX, and xi is the i-th word-embedding vector of the source sentence.
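As an illustration only, the N-gram reading stage can be sketched as a one-dimensional convolution over the word embeddings; the embedding dimension, the kernel size N = 3, and the "same" padding below are assumptions for the sketch, not values mandated by the model:

```python
# Sketch of the N-gram reading stage (assumptions: PyTorch, embedding dim 256,
# kernel size N = 3, "same" padding so each word gets one convolutional vector).
import torch
import torch.nn as nn

emb_dim, N = 256, 3
ngram_conv = nn.Conv1d(in_channels=emb_dim, out_channels=emb_dim,
                       kernel_size=N, padding=N // 2)

X = torch.randn(8, 20, emb_dim)          # batch of 8 sentences, 20 words each
E_X = ngram_conv(X.transpose(1, 2))      # convolve along the word dimension
E_X = E_X.transpose(1, 2)                # back to (batch, words, emb_dim)
assert E_X.shape == X.shape              # one convolutional vector per word
```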

The second stage is the glance-over stage. In this stage, the proposed model uses ONLSTM [16] (the green-coloured part in Figure 3) to extract the hierarchical structural information of the source sentence based on the information obtained in the N-gram reading stage and the source input. Similar to LSTM, ONLSTM also contains three gate structures. Specifically, the gate structures at the t-th time step can be calculated as follows:

where Wof, Woi, Woo, Uof, Uoi, Uoo, bof, boi, and boo are all trainable parameters; σ is the sigmoid function; and st-1 is the hidden state of the encoder's glance-over stage at the (t-1)-th time step. The word-level information vector ĉt of the source sentence at the t-th time step is calculated as follows:
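The following hypothetical PyTorch sketch illustrates these gate computations together with the word-level candidate ĉt; the weight names mirror Wof, Uof, …, Woc, Uoc in the text, while the hidden size of 512 and the choice of inputs (the convolutional word vector and the previous glance-over state) are assumptions taken from the surrounding description:

```python
# Sketch of the glance-over stage's gates and word-level candidate at step t
# (assumptions: PyTorch, hidden size 512; e_x_t is the t-th convolutional
# word vector and s_prev is the previous hidden state s_{t-1}).
import torch
import torch.nn as nn

hidden = 512
W_of, W_oi, W_oo, W_oc = (nn.Linear(hidden, hidden, bias=True) for _ in range(4))
U_of, U_oi, U_oo, U_oc = (nn.Linear(hidden, hidden, bias=False) for _ in range(4))

def gates_and_candidate(e_x_t: torch.Tensor, s_prev: torch.Tensor):
    f_t = torch.sigmoid(W_of(e_x_t) + U_of(s_prev))    # forget gate
    i_t = torch.sigmoid(W_oi(e_x_t) + U_oi(s_prev))    # input gate
    o_t = torch.sigmoid(W_oo(e_x_t) + U_oo(s_prev))    # output gate
    c_hat_t = torch.tanh(W_oc(e_x_t) + U_oc(s_prev))   # word-level candidate
    return f_t, i_t, o_t, c_hat_t
```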

where Woc, Uoc, and boc are all trainable parameters. Before calculating the neurons ct of ONLSTM at the t-th time step, we first define two level constants, indexi and indexf; indexi represents the level of the word processed at the t-th (current) time step, whereas indexf represents the level of the historical information of the sentence at the t-th time step.

When indexi ≥ indexf, there is an intersection of the processed word and the historical information of the sentence. Thus, the processed word information should be integrated into the intersection part to update the phrase-level information. In this case, the update method of the neurons ct is

Here, dim in Equation (7) is the dimension of ct. The dimension of the neurons in brackets gradually increases from bottom to top, and the smaller the dimension, the lower the level of information recorded by the neurons. Equation (7) shows that the information of low-level neurons (e.g., the word-level neurons in Figure 2) is updated with the word-level information (e.g., ĉt) at the current time step, the information of high-level neurons (e.g., the sentence-level neurons in Figure 2) is updated with the sentence-level information (e.g., ct-1), and the intersection part is updated by integrating the word-level information with the sentence-level information.

When indexi < indexf, there is no intersection of the processed word and the historical information of the sentence. Therefore, the intersection does not need to be written and keeps its original initial state (the zero state). In this case, the update method of the neurons ct is

As in Equation (7), the dimension of the neurons in brackets gradually increases from bottom to top in Equation (8), and the smaller the dimension number, the lower the level of information recorded by the neurons. However, unlike Equation (7), the intersection part in Equation (8) maintains its original state.

Now let us compute ct. First, we define 1index as the one-hot vector whose index-th element is 1, and let indexi correspond to 1indexi and indexf to 1indexf; then the neurons ct can be calculated as follows:

where z represents the independent variable of the cumsum function. The value of the master forget gate ~f increases monotonically from 0 to 1, and the value of the master input gate ~i decreases monotonically from 1 to 0. The two master gates serve as high-level control for the update operations of ct.

In Equations (11) and (12), wot represents the intersection part, ~i - wot represents the low-level part, and ~f - wot represents the high-level part. Thus, Equation (12) represents the combined result of Equations (7) and (8).
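A minimal sketch of this combined update, assuming element-wise PyTorch tensors for the gates and cell states, is as follows; it reproduces the case analysis of Equations (7), (8), and (12) as described above:

```python
# Sketch of the ordered-neurons cell update (Eq. (12) as described): the
# overlap w_ot of the master gates mixes old and new information with the
# ordinary LSTM gates, the remaining low dimensions take the word-level
# candidate, and the remaining high dimensions keep the sentence-level history.
import torch

def on_cell_update(c_prev, c_hat, f_t, i_t, master_f, master_i):
    w_ot = master_f * master_i                      # intersection of the two levels
    low = (master_i - w_ot) * c_hat                 # purely word-level part
    high = (master_f - w_ot) * c_prev               # purely sentence-level part
    mixed = w_ot * (f_t * c_prev + i_t * c_hat)     # standard LSTM-style update
    return low + mixed + high
```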

However, 1indexf and 1indexi are discrete variables, and it is not trivial [30] to compute gradients when a discrete variable occurs in the computation graph. Hence, in practice, 1indexf and 1indexi can be obtained by the softmax function (also known as the normalized exponential function, which extends the binary sigmoid classification function to multiple classes):

where the parameters involved are all trainable. After solving ct, the t-th time step hidden state st of the encoder's glance-over stage can be calculated as

The last stage of the encoder is the final encoding stage. In this stage, the proposed model uses a bidirectional LSTM to transform all inputs into the ultimate final representation states HF, which can be calculated as follows:

where the former is the t-th time step forward-propagation hidden state and the latter is the t-th time step backward-propagation hidden state. We concatenate them as the t-th time step final representation state of the final encoding stage. The superscript F denotes hidden states of the final encoding stage's encoder.
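As an illustration, the final encoding stage can be sketched with a standard bidirectional LSTM; the input feature size and the 256 hidden units per direction are assumptions based on Section 4.4:

```python
# Sketch of the final encoding stage (assumptions: PyTorch, 256 hidden units
# per direction as in Section 4.4, inputs of size 512 from the earlier stages).
import torch
import torch.nn as nn

final_encoder = nn.LSTM(input_size=512, hidden_size=256,
                        batch_first=True, bidirectional=True)

inputs = torch.randn(8, 20, 512)      # (batch, sentence length, feature dim)
H_F, _ = final_encoder(inputs)        # forward/backward states concatenated
assert H_F.shape == (8, 20, 512)      # h_t^F = [forward state ; backward state]
```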

    3.2.2|Attention mechanism

So that the hierarchical structural information of the sentence can also be considered when the model generates the attention distribution, a weak attention connection method is applied to the glance-over stage. The attention weights αti of the proposed model can be calculated as

where vT, Wos1, Wos2, and Wos3 are all trainable parameters. β1 and β2 are two hyperparameters with β1 + β2 = 1, where β1 is close to the upper bound of 1 and β2 is close to the lower bound of 0. Because β2, the attention connection parameter corresponding to the glance-over stage, is close to the lower bound of 0, this attention connection is weak. It is worth noting that the attention uses the i-th vector of HF and dt, the t-th time step hidden state of the decoder. The network structure of the decoder is an LSTM. Thus, dt can be calculated as follows:

After calculating the attention weights, the context vector ctt of the proposed model at the t-th time step can be calculated as
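The weak attention connection can be sketched as follows; the additive (Bahdanau-style) form of the score, the hidden size, and β1 = 0.9, β2 = 0.1 are assumptions for illustration, with vT, Wos1, Wos2, and Wos3 named as in the text:

```python
# Sketch of the weak attention connection: the final-stage state h_i^F enters
# the score with weight beta1 (close to 1) and the glance-over state s_i with
# weight beta2 (close to 0); the normalized weights form the context vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden, beta1, beta2 = 512, 0.9, 0.1
W_os1 = nn.Linear(hidden, hidden, bias=False)   # projects decoder state d_t
W_os2 = nn.Linear(hidden, hidden, bias=False)   # projects final-stage state h_i^F
W_os3 = nn.Linear(hidden, hidden, bias=False)   # projects glance-over state s_i
v = nn.Linear(hidden, 1, bias=False)

def weak_attention(d_t, H_F, S):
    # d_t: (batch, hidden); H_F, S: (batch, src_len, hidden)
    scores = v(torch.tanh(W_os1(d_t).unsqueeze(1)
                          + beta1 * W_os2(H_F)
                          + beta2 * W_os3(S))).squeeze(-1)
    alpha_t = F.softmax(scores, dim=-1)                     # attention weights
    ct_t = torch.bmm(alpha_t.unsqueeze(1), H_F).squeeze(1)  # context vector
    return alpha_t, ct_t
```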

    3.2.3|Output of model

To allow the proposed model to generate important words (such as person or place names) that are not in the vocabulary but appear in the source sentence, we incorporate a pointer-copy mechanism [27] into the model. The proposed model then generates the t-th time step final output Pfinal(yt) by combining the output of the decoder Pdecoder(yt) and the attention distribution Pattn(yt):

where pgen is the pointer-copy probability and can be calculated as

where Wgen, Ugen, Vgen, and bgen are all trainable parameters. The output Pdecoder(yt) of the decoder in (24) is calculated as

where Wdecode, Udecode, and Vdecode are all trainable parameters. The attention distribution Pattn(yt) in (24) can be obtained by (20).
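The pointer-copy combination described above can be sketched as follows; the tensor shapes and the use of an extended vocabulary for out-of-vocabulary source words are assumptions in the spirit of [27]:

```python
# Sketch of the pointer-copy output: the final distribution mixes the
# decoder's vocabulary distribution (weighted by p_gen) with the attention
# distribution scattered onto the source token ids (weighted by 1 - p_gen).
import torch

def final_distribution(p_gen, p_decoder, attn, src_ids, extended_vocab_size):
    # p_gen: (batch, 1); p_decoder: (batch, vocab); attn: (batch, src_len);
    # src_ids: (batch, src_len) long tensor of source token ids.
    batch = p_decoder.size(0)
    p_vocab = torch.zeros(batch, extended_vocab_size)
    p_vocab[:, :p_decoder.size(1)] = p_gen * p_decoder       # generate branch
    p_copy = torch.zeros(batch, extended_vocab_size)
    p_copy.scatter_add_(1, src_ids, (1.0 - p_gen) * attn)    # copy branch
    return p_vocab + p_copy                                  # P_final(y_t)
```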

    The objective function L(θ) of the proposed model is the negative log-likelihood:

where K is the batch size, bs denotes the bs-th batch, and X is the source sentence; {θ} is the set of all trainable parameters, which is determined through training and does not need to be preset.
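A minimal sketch of this objective over one batch, assuming the final distributions Pfinal(yt) are already stacked into a tensor and padding positions are masked out, is:

```python
# Sketch of the negative log-likelihood objective L(theta) over one batch
# (assumption: p_final holds P_final(y_t) for every target position and
# pad positions are excluded from the average).
import torch

def nll_loss(p_final, targets, pad_id=0):
    # p_final: (batch, tgt_len, vocab); targets: (batch, tgt_len) long tensor
    log_p = torch.log(p_final.gather(-1, targets.unsqueeze(-1)).squeeze(-1) + 1e-12)
    mask = (targets != pad_id).float()
    return -(log_p * mask).sum() / mask.sum()
```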

    4|EVALUATION SETUP

    4.1|Data sets

Two benchmark simplification data sets, Newsela and WikiLarge, are used to evaluate the proposed model.

Newsela is an artificial data set proposed by Xu et al. [17]. This data set was constructed by professional news editors based on 1130 news articles and written at four different reading levels (level 4 is the most simplified, and level 0 is the most complex) to meet children's reading standards in different grades. Following Zhang and Lapata's approach [13], we removed adjacent-level sentence pairs (such as 0-1, 1-2, and 2-3), which are too similar to each other. After this filtering, we are left with 94,208 sentence pairs for training, 1129 for validation, and 1077 for testing. WikiLarge is a non-artificial data set constructed by Zhang and Lapata [13]. It is a large English Wikipedia corpus and consists of 296,402 complex-simple sentence pairs for training. For validation and testing, following Zhang and Lapata, we use the validation and test sets created in the work of Xu et al. [23].

    4.2|Evaluation metrics

Following previous works [13-15], the proposed model is evaluated with the standard evaluation metric SARI (system output against references and against the input sentence) [23]. The SARI metric compares the model's output with both the references and the input sentence.

    4.3|Benchmark models

The comparisons are made with the HYBRID model proposed by Narayan and Gardent [21], the DRESS and DRESS-LS models by Zhang et al. [13], the NSELSTM model by Vu et al. [14], and the MULTI-STAGE model by Zhang and Deng [15]. Among them, HYBRID is a non-neural benchmark model, and the others are neural network benchmark models.

    4.4|Training details

The proposed model is implemented on top of OpenNMT [31], an open-source toolkit, and all trainable parameters are uniformly initialized within the range [-0.1, 0.1]. The Adagrad [32] algorithm is used to optimize the proposed model with a learning rate of 0.15 and an initial accumulator value of 0.1. The batch size is 64. To avoid gradient explosion, a maximum gradient norm of 2 is applied. The best-performing parameters of our model are set or found as follows: both the glance-over stage's encoder and the decoder have 512 hidden neurons; the final encoding stage's encoder has 256 hidden neurons; the window size N of the N-gram reading stage is 3; the hyperparameters β1 and β2 are tuned on the validation set, with the best values found to be 0.9 for β1 and 0.1 for β2. To limit the vocabulary size, we include only the 50,000 most frequent words in the vocabulary and replace the other words with the <UNK> token, as proposed by Jean et al. [33].
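For reference, the optimization settings above can be sketched in plain PyTorch as follows (the actual implementation uses OpenNMT; "model" here is any nn.Module standing in for ON-MULTI-STAGE):

```python
# Sketch of the Section 4.4 optimization settings in plain PyTorch.
import torch
import torch.nn as nn

def init_and_optimizer(model: nn.Module):
    for p in model.parameters():                       # uniform init in [-0.1, 0.1]
        nn.init.uniform_(p, -0.1, 0.1)
    return torch.optim.Adagrad(model.parameters(), lr=0.15,
                               initial_accumulator_value=0.1)

def training_step(model, opt, loss):
    opt.zero_grad()
    loss.backward()
    # clip to a maximum gradient norm of 2 to avoid gradient explosion
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0)
    opt.step()
```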

    5|EXPERIMENTS

In this section, we first compare the proposed model with the benchmark models. We then conduct exploration experiments within the model. Finally, we compare the outputs of the proposed model with those of the best relevant benchmark model. Table 1 shows the experimental results. Because Newsela contains high-quality simplifications created by professional editors, the Newsela results are discussed first.

    5.1|Experimental model comparisons

In this experiment, the proposed model is compared with the other benchmark models on Newsela and WikiLarge. As shown in Table 1, on Newsela the proposed model achieves the highest SARI of 31.49, which is better than all benchmark models: it outperforms the best neural model (MULTI-STAGE) by 1.28 SARI points, an improvement of 4.24%, and outperforms the best non-neural model (HYBRID) by 1.49, an improvement of 4.97%.

On the non-artificial WikiLarge data set, the proposed model also achieves the highest SARI, 38.22. Specifically, it outperforms the best neural model (MULTI-STAGE) by 0.1 SARI points, an increase of 0.26%, and outperforms the best non-neural model (HYBRID) by 6.82, a significant increase of 21.72%. This indicates that a model that considers the hierarchical information of the sentence is effective and performs well in sentence simplification tasks.

5.2|Exploration experiments within the proposed model

In this subsection, we conduct exploration experiments within the proposed model. WikiLarge comprises automatically aligned sentence pairs in which errors unavoidably exist, and its uniform writing style may cause models to generalize poorly [17]. In contrast, Newsela contains high-quality simplifications created by professional editors. Thus, to conduct the exploration experiments more efficiently, the experiments are mainly carried out on Newsela.

    5.2.1|Effect of window size in N-gram reading stage

To investigate the effect of the window size N in the N-gram reading stage, we tried different settings of N in {1, 2, 3, 4, 5} and conducted the experiments on Newsela. The effect of the window size N on the simplification task is measured by SARI. The experimental results, shown in Figure 4, clearly show that N = 3 yields the best performance. N = 4 reaches comparable performance, but larger N values perform worse than N = 3, and all do better than N = 1. This result implies that it is better for the proposed model to represent a word by using the surrounding words, which helps the glance-over stage better extract the hierarchical structural information of sentences.

    5.2.2|Effect of weak attention

To evaluate the impact of the scale parameters β1 and β2 in Equation (21), we explored the settings {(β1 = 1, β2 = 0), (β1 = 0.95, β2 = 0.05), (β1 = 0.9, β2 = 0.1), (β1 = 0.8, β2 = 0.2), (β1 = 0.7, β2 = 0.3)} and conducted the experiment on Newsela. SARI measures the effect of the scale parameters on the simplification task. The experimental results, shown in Figure 5, clearly show that the setting (β1 = 0.95, β2 = 0.05) performs best. This suggests that (1) the scale parameters β1 and β2 are best set at around 0.95:0.05; and (2) the weak-connection approach can help the decoder of the proposed model use the hierarchical structural information of the sentence and can therefore improve the model performance to some extent.

TABLE 1 Comparison with benchmark models on the test sets

FIGURE 4 Effect of different window sizes N in the N-gram reading stage on the simplification task

FIGURE 5 Effect of different scale parameters β1 and β2 in the weak attention method on the simplification task

5.2.3|Dimension of ordered neurons

To analyse the impact of the dimension of ONs in the glance-over stage on the sentence simplification task, we explored different dimension settings dim in {64, 128, 192, 256, 384, 512} and conducted the experiment on Newsela. The results, again measured by SARI, are shown in Figure 6. The SARI score rises as the dimension of the ONs increases, although the improvement slows once the dimension exceeds 384. This means that (1) increasing the dimension dim of the ONs can improve the model's performance on sentence simplification tasks; and (2) if there is a need to save memory without reducing simplification performance too much, the dimension dim of the ONs can be set between 256 and 384 while still achieving relatively good results.

    5.3|Model output example analyses

FIGURE 6 Effect of ordered neurons dimension on the simplification task

Because the proposed model is an improvement on the MULTI-STAGE model, to demonstrate the improvement more intuitively, this section compares several output examples of the two models on the test set in Table 2.

As shown in Table 2, in the first example, the ON-MULTI-STAGE model can replace the word 'assist' with 'help'. In the second example, the ON-MULTI-STAGE model can further delete unnecessary words in the sentence. In the last example, the ON-MULTI-STAGE model can use the simple word 'study' to replace most of the content of the input sentence while maintaining the original meaning. As can be seen from these examples, the ON-MULTI-STAGE model further improves on the MULTI-STAGE model. However, the first example also shows that the model still has some inadequacy in maintaining grammatical accuracy: after replacing the word 'assist' with 'help', the word 'an' is not replaced with 'a'.

    6|CONCLUSIONS

We proposed an ON-MULTI-STAGE model based on the MULTI-STAGE model. Unlike the conventional sequence-to-sequence model, the ON-MULTI-STAGE model can capture the hierarchical structural information of the sentence through neuron-ordered information. To the best of our knowledge, the ON-MULTI-STAGE model is the first Seq2Seq model to introduce the ON network and the first model to use the ON network for sentence simplification. To verify the effectiveness of the proposed model, we evaluated it on two benchmark simplification data sets. The experimental results showed that the proposed model outperforms all benchmark models and can significantly reduce the complexity of the input sentence while preserving the meaning of the source sentence. Future work may explore the following aspects: (1) using multi-task learning methods to improve the grammatical accuracy of the generated target sentences; (2) improving the decoder's generation capability by utilizing multi-stage decoding.

    TABLE 2 Comparison of output examples on test set

    ACKNOWLEDGEMENTS

This work was supported in part by the Department of Education of Guangdong Province under the Special Innovation Program (Natural Science), grant number 2015KTSCX183, and in part by South China University of Technology under the 'Development Fund', fund number x2js-F8150310.
