
    A Semantic Supervision Method for Abstractive Summarization

    Computers, Materials & Continua, 2021, Issue 10

    Sunqiang Hu, Xiaoyu Li, Yu Deng*, Yu Peng, Bin Lin and Shan Yang

    1 School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China

    2 School of Engineering, Sichuan Normal University, Chengdu, 610066, China

    3 Department of Chemistry, Physics, and Atmospheric Sciences, Jackson State University, Jackson, MS, 39217, USA

    Abstract: In recent years, many text summarization models based on pre-training methods have achieved very good results. However, in these models, semantic deviations easily arise between the original input representation and the representation produced by the multi-layer encoder, which may lead to inconsistencies between the generated summary and the source text. The Bidirectional Encoder Representations from Transformers (BERT) model improves the performance of many tasks in Natural Language Processing (NLP). Although BERT has a strong capability to encode context, it lacks a fine-grained semantic representation. To solve these two problems, we propose a semantic supervision method based on Capsule Network. First, we extract the fine-grained semantic representations of both the input and the encoded result in BERT with a Capsule Network. Second, we use the fine-grained semantic representation of the input to supervise that of the encoded result. We then evaluated our model on a popular Chinese social media dataset (LCSTS), and the results show that our model achieves higher ROUGE scores (including R-1 and R-2) and outperforms the baseline systems. Finally, we conducted a comparative study on the stability of the model, and the experimental results show that our model is more stable.

    Keywords: Text summarization; semantic supervision; capsule network

    1 Introduction

    The goal of text summarization is to deliver the important information of a source text in a small number of words. In the current era of information explosion, text information floods the Internet, so text summarization is necessary to help us obtain useful information from source texts. With the rapid development of artificial intelligence, automatic text summarization was proposed, that is, computers can aid people in the complex task of summarizing text. By using machine learning, deep learning, and other methods, we can build a general model for automatic text summarization that extracts summaries from source texts in place of humans.

    Automatic text summarization is usually divided into two categories according to the implementation method: extractive summarization and abstractive summarization. Extractive summarization extracts sentences containing key information from the source text and combines them into a summary, while abstractive summarization compresses and refines the information of the source text to generate a new summary. Compared with extractive summarization, abstractive summarization is more flexible, because the machine can generate summaries that are more informative and attractive. Abstractive text summarization models are usually based on the sequence-to-sequence (seq2seq) model [1], which contains two parts: an encoder and a decoder. The encoder encodes the input as a fixed-length context vector that carries the important information of the input text, and the decoder decodes this vector into the desired output. In early work, an RNN or LSTM [2] was typically chosen as the encoder-decoder structure of seq2seq, and the last hidden state of the RNN or LSTM served as the context vector for the decoder.
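
    As an illustration of this encoder-decoder structure, the following is a minimal PyTorch sketch (not the code used in this paper): an LSTM encoder whose final hidden state serves as the fixed-length context vector, and an LSTM decoder that generates the output from it. All names and sizes are illustrative.

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        """Minimal LSTM encoder-decoder; the encoder's last hidden state is the context vector."""
        def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, src_ids, tgt_ids):
            # Encode the source; (h, c) is the fixed-length context passed to the decoder.
            _, (h, c) = self.encoder(self.embed(src_ids))
            dec_out, _ = self.decoder(self.embed(tgt_ids), (h, c))
            return self.out(dec_out)  # (batch, tgt_len, vocab_size) logits over the vocabulary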

    BERT [3] is a pre-trained language model that is trained in advance on a large amount of unsupervised data. With its good capability for contextual semantic representation, BERT has achieved very good performance on many NLP tasks. However, it is not directly suitable for generative tasks because it lacks a decoder structure. Dong et al. [4] proposed the Unified Pre-trained Language Model (UNILM), whose seq2seqLM sub-module can complete natural language generation tasks by modifying BERT's mask matrix. BERT can encode each word accurately according to its context, but it lacks a fine-grained semantic representation of the entire input text, which leads to semantic deviations between the result encoded by BERT and the original input. The traditional seq2seq model does not perform well in text summarization, so we consider using the pre-trained model BERT to improve the quality of the generated summaries. However, BERT has the flaws mentioned above, so we aim to overcome these defects and improve the effectiveness of a BERT-based text summarization model.

    Nowadays, neural networks have been applied in many fields [5,6], and automatic text summarization is one of the hot research topics. In this paper, following the idea of seq2seqLM, we modified the mask matrix of BERT and used BERT-base to complete abstractive summarization. To reduce semantic deviations, we introduced a semantic supervision method based on Capsule Network [7] into our model. Following previous work, we evaluated the proposed model on the LCSTS dataset [8]; the experimental results show that our model is superior to the baseline systems and that the proposed semantic supervision method can indeed improve the effectiveness of BERT.

    The remainder of this paper is organized as follows. Related work is discussed in Section 2. The proposed model is presented in Section 3. Details of the experiments are explained in Section 4. Comparison and discussion of the experimental results are given in Section 5. Conclusions and future work are drawn in Section 6.

    2 Related Works

    2.1 Seq2seq Model

    Research on abstractive summarization mainly builds on the seq2seq model proposed by Cho et al. [1], which handles the unequal lengths of input and output in generative tasks. The seq2seq model contains two parts: an encoder and a decoder. The encoder encodes the input into a context vector C, and the decoder decodes the output from C. The seq2seq model was originally used for Neural Machine Translation (NMT); Rush et al. [9] first applied it, together with the attention mechanism [10], to abstractive summarization, and it proved to have good performance.

    2.2 Pre-Trained Model and BERT

    The pre-trained language model has become an important technology in the NLP field in recent years. The main idea is that the model's parameters are no longer randomly initialized, but trained in advance on some tasks (such as language modeling) over a large-scale text corpus. The model is then fine-tuned on a small dataset of a specific task, which makes it easy to train. An early pre-trained language model is Embeddings from Language Models (ELMo) [11], which performs feature extraction with a bidirectional LSTM and is fine-tuned for downstream tasks. The Generative Pre-Training language model (GPT) achieves very good performance on text generation tasks by replacing the LSTM with a Transformer [12]. Building on GPT, Devlin et al. [3] used a bidirectional Transformer and a higher-quality large-scale dataset for pre-training and obtained a better pre-trained language model, BERT.

    Liu et al. [13] proposed BERTSum, a simple variant of BERT for extractive summarization, and the model outperformed the baselines on the CNN/DailyMail dataset. Later, Liu et al. [13] added a decoder to BERTSum to complete abstractive summarization and conducted experiments on the same dataset. The experimental results showed that their model was superior to previous models in both extractive and abstractive summarization. The goal of UNILM, proposed by Dong et al. [4], is to adapt BERT to generative tasks, which is also the goal of the Masked Sequence to Sequence Pre-training model (MASS) proposed by Song et al. [14]. However, UNILM is more succinct: it sticks to BERT's idea and uses only encoders to complete various NLP tasks. UNILM is trained with three objectives: unidirectional LM (left-to-right and right-to-left), bidirectional LM, and seq2seqLM. Seq2seqLM can complete abstractive summarization: it treats the source text as the first sentence and the corresponding summary as the second sentence. The first sentence is encoded with the bidirectional LM, and the second sentence is encoded with the unidirectional LM (left-to-right).

    2.3 Semantic Supervision and Capsule Network

    Ma et al. [15] proposed a method to improve semantic relevance in the seq2seq model. By computing the cosine similarity between the semantic vectors of the source text and the summary, a measure of their semantic relevance is obtained: the larger the cosine value, the more relevant they are. The negative value of the cosine similarity is added to the loss function to maximize the semantic relevance between them. Ma et al. [16] also proposed using an autoencoder as an assistant supervisor to improve the text representation: by minimizing the L2 distance between the summary encoding vector and the source text encoding vector, the semantic representation of the source text is supervised and improved.

    In 2017, Sabour et al. [7] proposed a new neural network structure called Capsule Network. The input and output of a Capsule Network are both vectors, and image classification experiments showed that the Capsule Network has a strong ability to aggregate features. Zhao et al. [17] proposed a model based on Capsule Network for text classification, and the model performed better than the baseline systems in their experiments.

    Based on the methods mentioned above, we complete abstractive summarization by adopting the idea of seq2seqLM and add the semantic supervision method into the model. We conducted experiments on the Chinese dataset LCSTS [8] and analyzed the experimental results.

    3 Proposed Model

    3.1 BERT for Abstractive Summarization

    Our model structure is shown in Fig. 1, and it is composed of four parts. The Embedding Layer transforms the input tokens into vector representations. The Transformer Layer encodes the token vector representations according to the context information. The Output Layer parses the encoded result of the Transformer Layer. The last part is our proposed Semantic Supervision module, which supervises the semantic encoding of the Transformer Layer.

    Embedding Layer

    BERT's embedding layer contains Token Embedding, Segment Embedding and Position Embedding. Token Embedding is the vector representation of a token, obtained by looking up the embedding matrix with the token ID. Segment Embedding expresses whether the current token comes from the first segment or the second segment. Position Embedding is the position vector of the current token. Fig. 1 shows the Embedding Layer of BERT. The input representation follows that of BERT: we add a special token ([CLS]) at the beginning of the input and a special token ([SEP]) at the end of every segment. T = {T1, T2, ..., Tn} denotes the token sequence of the source text, and S = {S1, S2, ..., Sm} denotes the token sequence of the summary. We obtain the model input X = {[CLS], T1, T2, ..., Tn, [SEP], S1, ..., Sm, [SEP]} by splicing T, S and the special tokens. By summing the corresponding Token Embedding, Position Embedding and Segment Embedding, we get the vector representation of each input token.
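
    As a sketch of how the three embeddings are combined (sizes are illustrative; BERT additionally applies layer normalization and dropout, which are omitted here):

    import torch
    import torch.nn as nn

    class InputEmbedding(nn.Module):
        """Sum of token, position and segment embeddings, as described above."""
        def __init__(self, vocab_size=7655, max_len=512, hidden=768):
            super().__init__()
            self.tok = nn.Embedding(vocab_size, hidden)
            self.pos = nn.Embedding(max_len, hidden)
            self.seg = nn.Embedding(2, hidden)  # segment 0: source text, segment 1: summary

        def forward(self, token_ids, segment_ids):
            # token_ids, segment_ids: (batch, seq_len)
            positions = torch.arange(token_ids.size(1), device=token_ids.device)
            return self.tok(token_ids) + self.pos(positions) + self.seg(segment_ids)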

    Transformer Layer

    The Transformer Layer consists of N Transformer Blocks, which share the same structure but have separately trained parameters. The Transformer was originally proposed by Vaswani et al. [12], but only its encoder part is used in BERT. BERT performs well in many NLP tasks because it relies on a large amount of unsupervised data and on the excellent semantic encoding capability of the Transformer.

    The input of seq2seqLM is the same as that of BERT; the main difference is that seq2seqLM changes the mask matrix of the multi-head attention in the Transformer. As shown on the left of Fig. 2, the source text's tokens can attend to each other from both directions (left-to-right and right-to-left), while every token of the summary can only attend to its left context (including itself) and to all tokens in the source text. The mask matrix is designed as follows [4]:
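
    Consistent with the description below and with the seq2seqLM formulation of [4], the mask matrix M of Eq. (1) can be written as:

    M_{ij} =
    \begin{cases}
    0, & \text{if token } i \text{ is allowed to attend to token } j \\
    -\infty, & \text{otherwise}
    \end{cases}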

    An element of the mask matrix equal to 0 means that the i-th token can attend to the j-th token; an element equal to −∞ means that the i-th token cannot attend to the j-th token. On the right of Fig. 2 we show the self-attention mask matrix M of Eq. (1), which is designed for text summarization. The left part of M is set to 0 so that all tokens can attend to the source text tokens. Since our goal is to predict the summary, attention from the source text to the summary is unnecessary, so we set the upper-right elements to −∞. In the bottom-right block, we set the lower-triangular elements to 0 and the other elements to −∞, which prevents each summary token from attending to the tokens after it.
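
    A small illustrative sketch of how such a mask can be constructed for a source of length src_len and a summary of length tgt_len (the function and variable names are ours, not from the paper):

    import torch

    def seq2seq_lm_mask(src_len: int, tgt_len: int) -> torch.Tensor:
        """Additive attention mask: 0 where attention is allowed, -inf where it is blocked."""
        n = src_len + tgt_len
        mask = torch.full((n, n), float("-inf"))
        mask[:, :src_len] = 0.0  # every token may attend to all source tokens
        # Summary tokens may additionally attend to their own left context (including themselves).
        allowed = torch.tril(torch.ones(tgt_len, tgt_len, dtype=torch.bool))
        block = torch.full((tgt_len, tgt_len), float("-inf"))
        block[allowed] = 0.0
        mask[src_len:, src_len:] = block
        return mask  # added to the attention scores before the softmax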

    Figure 1: An overview of our model. Its main body (on the left) is composed of the embedding layer, the transformer layer, and the output layer. Based on the main body, it contains a semantic supervision module (on the right)

    Figure 2: An overview of the self-attention mask matrix

    The output of the Embedding Layer is defined as T0 = {X1, X2, ..., Xn}, where Xi is the vector representation of the i-th token and n is the length of the input sequence. We abbreviate the output of the l-th Transformer Block as Tl = Transformerl(Tl−1). In each Transformer Block, by aggregating multiple self-attention heads, we obtain the output of the multi-head attention. For the l-th Transformer Block, the output Al of the multi-head attention is computed as follows:
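
    For reference, a single-head form consistent with the standard Transformer self-attention [12], with the mask M of Eq. (1) added as in [4], is (the per-head projections and their concatenation follow [12]):

    Q = T_{l-1} W_l^{Q}, \qquad K = T_{l-1} W_l^{K}, \qquad V = T_{l-1} W_l^{V}

    A_l = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}} + M\right) V

    where d_k is the dimension of each attention head.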

    Output Layer

    We take the output of the last Transformer Block as the input of the Output Layer. The Output Layer consists of three parts: two fully connected layers and one Layer Normalization.

    The first fully connected layer adds a nonlinear operation to BERT's output, using GELU as the activation function, which is widely used in BERT. In Eq. (3), TN is the output of the last Transformer Block, W1 is the matrix to be trained, b1 is the bias, and O1 is the output of the first fully connected layer.
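
    From this description, Eq. (3) presumably takes the form:

    O_1 = \mathrm{GELU}\left(T_N W_1 + b_1\right)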

    Different from Batch Normalization [18], Layer Normalization [19] does not depend on the batch size or the length of the input sequence. Adding Layer Normalization helps avoid vanishing gradients. In Eq. (4), LN(·) is Layer Normalization and O2 is the output of LN(·).
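
    Accordingly, Eq. (4) is presumably of the form (a residual connection, if used, is not stated here):

    O_2 = \mathrm{LN}\left(O_1\right)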

    The second fully connected layer parses the output; it contains n × I units (n is the length of the output and I is the size of the vocabulary), and we use softmax as the activation function. The softmax function is commonly used in multi-class classification, and it maps the outputs of multiple neurons to the interval (0, 1); predicting a word is equivalent to a multi-class classification task. In Eq. (5), W3 is the matrix to be trained, b3 is the bias, and O3 is the final output of our model.
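
    From this description, Eq. (5) presumably takes the form:

    O_3 = \mathrm{softmax}\left(O_2 W_3 + b_3\right)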

    3.2 Semantic Supervision Based on Capsule Network

    Because BERT lacks a fine-grained semantic representation, it cannot produce high-quality summaries when applied to text summarization, and there are semantic deviations between the original input and the encoded result after passing through the multi-layer encoder. We aim to alleviate these problems by adding semantic supervision based on Capsule Network. The implementation of semantic supervision is shown on the right side of Fig. 1. At the training stage, we take the result of Token Embedding as the input of the Capsule Network and obtain the semantic representation Vi of the input. At the same time, we apply the same operation to the output of the last Transformer Block to get the semantic representation Vo of the output. We implement the semantic supervision by minimizing the distance d(Vi, Vo) between the semantic representations Vi and Vo. d(Vi, Vo) is calculated as in Eq. (6).
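
    A natural choice for this distance, consistent with the L2-distance supervision of [16], is the Euclidean distance, so Eq. (6) is presumably of the form:

    d\left(V_i, V_o\right) = \left\| V_i - V_o \right\|_2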

    Ma et al. [15] directly took the input and output of the model as semantic representations, which has low generalization capability. We therefore add a Capsule Network [7], which is capable of high-level feature clustering, to extract semantic features. The Capsule Network uses vectors as both input and output, and vectors have good representational capability, for example the word vectors in word2vec. Our experiments also show that the Capsule Network performs better than LSTM [2] and GRU [20]. We define a set of input vectors u = {u1, u2, ..., un}, and the output of the Capsule Network is v = {v1, v2, ..., vn}. The output of the Capsule Network is calculated as follows:
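
    The displayed computation presumably follows the standard dynamic-routing procedure of Sabour et al. [7], which can be summarized as:

    \hat{u}_{j|i} = W_{ij}\, u_i, \qquad
    s_j = \sum_i c_{ij}\, \hat{u}_{j|i}, \qquad
    c_{ij} = \frac{\exp(b_{ij})}{\sum_k \exp(b_{ik})}

    v_j = \mathrm{squash}(s_j) = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \cdot \frac{s_j}{\|s_j\|}, \qquad
    b_{ij} \leftarrow b_{ij} + \hat{u}_{j|i} \cdot v_j

    where the routing logits b_{ij} are initialized to zero and updated over a fixed number of routing iterations (three in our setting).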

    The loss function of Semantic Supervision can be written as follows:
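
    Given the distance defined in Eq. (6), the semantic supervision loss of Eq. (15) is presumably:

    L_{ss} = d\left(V_i, V_o\right)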

    3.3 Loss Function and Training

    There are two loss functions in our model that need to be optimized. The first is the categorical cross-entropy loss in Eq. (16), where N is the number of samples, y ∈ Rn is the true label of the input sample, ŷ is the corresponding predicted label, D is the sample set, n is the length of the summary and m is the vocabulary size. The other is the semantic supervision loss defined in Eq. (15). Our objective is to minimize the loss function in Eq. (17).
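
    Using the symbols defined above, Eq. (16) and Eq. (17) presumably take the following form, where λ is a balancing weight (our assumption; the original weighting is not reproduced here):

    L_{ce} = -\frac{1}{N} \sum_{(x, y) \in D} \sum_{t=1}^{n} \sum_{j=1}^{m} y_{t,j} \log \hat{y}_{t,j}

    L = L_{ce} + \lambda\, L_{ss}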

    During training, we used the Adam optimizer [21] with the following settings: learning rate α = 1×10^-5, momentum parameters β1 = 0.9 and β2 = 0.999, and ε = 1×10^-8.
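
    A minimal sketch of this optimizer configuration in PyTorch (the model object is assumed to be defined elsewhere):

    import torch

    def build_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
        # Adam with the hyperparameters stated above.
        return torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.999), eps=1e-8)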

    4 Experiments

    In this section, we will introduce our experiments in detail, including the dataset, evaluation metrics, experiment settings and baseline systems.

    Table 1: Statistics of the different parts of LCSTS

    4.1 Dataset

    We conducted experiments on the LCSTS dataset [8] to evaluate the proposed method. LCSTS is a large-scale Chinese short text summarization dataset collected from Sina Weibo, a famous social media website in China. As shown in Tab. 1, it consists of more than 2.4 million pairs (source text and summary) and is split into three parts: PART I includes 2,400,591 pairs, PART II includes 10,666 pairs, and PART III includes 1,106 pairs. In addition, the pairs of PART II and PART III have manual scores (according to the relevance between the source text and the summary) ranging from 1 to 5. Following previous work [8], we only chose pairs with scores no less than 3, and used PART I as the training set, PART II as the validation set, and PART III as the test set.

    4.2 Evaluation Metric and Experiment Setting

    We used ROUGE scores [22], which have been widely used for text summarization, to evaluate our model. They measure the quality of a summary by computing the overlap between the generated summary and the reference summary. Following previous work [8], we used ROUGE-1 (unigrams), ROUGE-2 (bigrams), and ROUGE-L (longest common subsequence) scores as the evaluation metrics for the experimental results.
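
    As an illustration of the metric (not the official ROUGE toolkit), the following sketch computes an n-gram overlap F1 between a generated and a reference summary:

    from collections import Counter

    def rouge_n_f1(candidate, reference, n=1):
        """Simple ROUGE-N F1: n-gram overlap between candidate and reference token lists."""
        def ngrams(tokens):
            return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
        cand, ref = ngrams(candidate), ngrams(reference)
        overlap = sum((cand & ref).values())
        if overlap == 0:
            return 0.0
        precision = overlap / sum(cand.values())
        recall = overlap / sum(ref.values())
        return 2 * precision * recall / (precision + recall)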

    We used the Chinese vocabulary of BERT-base, which contains 21,128 characters, while the number of distinct characters we counted in PART I of LCSTS is 10,728. To reduce computation, we only used the characters in the intersection of the two, i.e., 7,655 characters. In our model, we used the default embedding size of BERT-base (768), the number of heads h = 12, and the number of Transformer Blocks N = 12. For the Capsule Network, we set the number of output capsules to 50, the output dimension to 16, and the number of routing iterations to 3. We set the batch size to 16 and used Dropout [23] in our model. Our model was trained on a single NVIDIA 2080Ti GPU. Following previous work [24], we implemented beam search and set the beam size to 3.
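
    For convenience, the hyperparameters reported above can be gathered as follows (the key names are ours, purely illustrative):

    config = {
        "vocab_size": 7655,           # intersection of the BERT-base vocabulary and LCSTS PART I characters
        "hidden_size": 768,           # BERT-base embedding size
        "num_heads": 12,
        "num_transformer_blocks": 12,
        "capsule_num": 50,
        "capsule_dim": 16,
        "routing_iterations": 3,
        "batch_size": 16,
        "beam_size": 3,
    }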

    4.3 Baseline Systems

    We compared the proposed model's ROUGE scores with those of the following models, which we briefly introduce next.

    RNN and RNN-context [8] are two seq2seq baseline models. The former uses a GRU as the encoder and decoder; the latter adds an attention mechanism on top of it.

    CopyNet [25] is an attention-based seq2seq model with a copy mechanism. The copy mechanism allows some tokens of the generated summary to be copied from the source content, which can effectively alleviate the problem of repeated words in abstractive summarization.

    DRGD [26] is a seq2seq-based model with a deep recurrent generative decoder. The model combines the decoder with a variational autoencoder and uses a recurrent latent random model to learn the latent structure information implied in the target summaries.

    WEAN [27], whose full name is Word Embedding Attention Network, is a novel model based on the encoder-decoder framework. The model generates words by querying distributed word representations, hoping to capture the meaning of the corresponding words.

    Seq2Seq+superAE [16] is a seq2seq-based model with an assistant supervisor, which uses the representation of the summary to supervise that of the source content. The model uses an autoencoder as the assistant supervisor. In addition, to determine the strength of supervision more dynamically, adversarial learning is introduced into the model.

    Table 2: ROUGE scores of our model and the baseline systems on LCSTS (W: word level; C: character level)

    5 Results and Discussion

    For clarity, we name the BERT with the modified mask matrix BERT-seq2seqLM, and we denote our model with semantic supervision based on Capsule Network as SSC.

    The experimental results of our model and the baseline systems on the LCSTS dataset are shown in Tab. 2. First, we compared our model with BERT-seq2seqLM: SSC outperformed BERT-seq2seqLM in ROUGE-1, ROUGE-2, and ROUGE-L, which indicates that the semantic supervision method can improve the generation quality of BERT-seq2seqLM. Moreover, we compared the ROUGE scores of our model with those of recent summarization systems: our model outperformed the baseline systems, achieving higher scores on ROUGE-1 and ROUGE-2, while it was slightly lower than the baseline on ROUGE-L.

    Figure 3: ROUGE score curves of BERT-seq2seqLM and our model under different training epochs (including the ROUGE-1, ROUGE-2, and ROUGE-L curves)

    In addition, we compared the ROUGE scores of the models at different epochs, as shown in Fig. 3, which contains the ROUGE-1, ROUGE-2, and ROUGE-L scores of the models at different epochs. From the three subgraphs, we can see that after adding semantic supervision, the training of BERT-seq2seqLM is more stable and the overall evaluation scores are higher.

    For semantic supervision, in addition to the Capsule Network, we also tried LSTM and GRU. However, comparative experiments showed that the Capsule Network is more suitable. As shown in Tab. 3, the ROUGE-1, ROUGE-2 and ROUGE-L scores of semantic supervision based on LSTM were higher than those of BERT-seq2seqLM without semantic supervision, and semantic supervision based on GRU and on the Capsule Network was also better than BERT-seq2seqLM. Therefore, the experimental comparison shows that introducing the semantic supervision method into BERT-seq2seqLM is necessary to alleviate the problem of fine-grained semantic representation, and the best improvement is achieved by using the Capsule Network for semantic supervision.

    Table 3: ROUGE scores of the semantic supervision network with different structures on LCSTS

    Table 4: Some generated summary examples from the LCSTS test dataset

    As shown in Tab. 4, we list two examples from the test dataset. Each example includes the source text, the reference summary, the summary generated by BERT-seq2seqLM, and the summary generated by our model. The first example is about smartphones and personal computers: BERT-seq2seqLM takes the frequently appearing word "iPhone" as the main subject of the summary, which leads to a deviation. The second example is a summary of Mark Cuban's life: from the source text, we can see that the last sentence summarizes the whole article, but BERT-seq2seqLM chose the wrong content as the summary, whereas BERT-seq2seqLM with semantic supervision generates content close to the reference summary. From the content of the generated summaries, we can see that our semantic supervision method yields better results, and the comparison shows that the semantic supervision method based on Capsule Network can reduce the semantic deviations of BERT encoding to some extent.

    6 Conclusion

    Following the idea of UNILM, we transformed the mask matrix of BERT-base to accomplish abstractive summarization. At the same time, we introduced the semantic supervision method based on Capsule Network into our model and improved the performance of the text summarization model on the LCSTS dataset. Experimental results showed that our model outperformed the baseline systems. In this paper, the semantic supervision method was only applied to the pre-trained language model; we have not yet verified it on other neural network models. In addition, we only used a Chinese dataset and did not verify our method on other datasets. In the future, we will improve the semantic supervision method and conduct further experiments to address these issues.

    Acknowledgement:We would like to thank all the researchers of this project for their effort.

    Funding Statement: This work was partially supported by the National Natural Science Foundation of China (Grant No. 61502082) and the National Key R&D Program of China (Grant No. 2018YFA0306703).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
