
Research on system combination of machine translation based on Transformer

High Technology Letters, 2023, No. 3

LIU Wenbin (劉文斌), HE Yanqing, LAN Tian, WU Zhenfeng

    (Research Center for Information Science Theory and Methodology, Institute of Scientific and Technical Information of China, Beijing 100038, P.R.China)

Abstract Influenced by its training corpus, the performance of different machine translation systems varies greatly. Aiming at higher-quality translations, system combination methods combine the translation results of multiple systems through statistical combination or neural network combination. This paper proposes a new multi-system translation combination method based on the Transformer architecture, which uses a multi-encoder to encode the source sentences and the translation results of each system in order to realize encoder combination and decoder combination. Experimental verification on a Chinese-English translation task shows that this method gains 1.2-2.35 bilingual evaluation understudy (BLEU) points over the best single-system results, 0.71-3.12 BLEU points over the statistical combination method, and 0.14-0.62 BLEU points over the state-of-the-art neural network combination method. The experimental results demonstrate the effectiveness of the proposed Transformer-based system combination method.

Keywords: Transformer, system combination, neural machine translation (NMT), attention mechanism, multi-encoder

    0 Introduction

Machine translation refers to the process of translating one language into another with a computer. Both statistical machine translation (SMT) and neural machine translation (NMT) are particularly dependent on their training corpus. Translation models perform differently when trained on data from different sources, and their translation quality is correspondingly uneven. Combining the translation results of multiple systems can not only improve the generalization ability of the translation model, but also expand the translation hypothesis space. Therefore, system combination usually produces performance that is commensurate with, or even better than, a single translation model.

At present, combining machine translation systems is mainly achieved in one of two ways: one is based on statistics, and the other is based on neural networks. The statistical method designs multiple features and uses a voting mechanism to combine the translation results of multiple systems at the sentence level, phrase level, or word level. These methods focus on the potential mapping relationship between the translation hypotheses of multiple systems and the search space of a single system[1-5]. However, they are not end-to-end modeling methods, and error propagation occurs during combination. Neural network combination includes two mechanisms: model-level combination and parameter-level combination. The model-level combination method takes the translation results of multiple machine translation systems as input, adopts a recurrent neural network (RNN) encoder-decoder architecture with end-to-end modeling, and fuses the source context information in the encoder and decoder. The parameter-level combination method combines the prediction probabilities of multiple decoders to predict the translation at the next moment; instead of consuming the translation results of multiple machine translation systems, it only adopts model-average or model-ensemble decoding strategies within a single system to fuse at the parameter level.

Different from the above methods, this paper draws on research methods for paragraph translation[6-7] and proposes a multi-system translation combination method based on the Transformer architecture, which uses a multi-encoder to encode the source language sentences and the translation results of each system in order to realize encoder combination and decoder combination. Encoder combination transforms the hidden information of the multi-system translations into a new representation through an attention network, and the hidden layer information of synonymous sentences is fused through a gating mechanism on the encoder side. Decoder combination calculates the attention between the hidden layer information of the multi-system translations and the source language sentence on the decoder side to obtain the fusion vectors, thus producing a combined translation of higher quality.

In this work, experimental verification of the proposed method on a Chinese-English translation task shows that, compared with the best single-system results, this method gains 1.2-2.35 bilingual evaluation understudy (BLEU) points, as well as 0.71-3.12 BLEU points compared with the statistical combination method and 0.14-0.62 BLEU points compared with the neural network combination method. The experimental results show that the Transformer-based machine translation system combination method can effectively improve translation quality.

    The main contributions of this study can be summarized as follows.

(1) The proposed Transformer-based neural network system combination method introduces the source language sentences and the translation hypotheses of multiple systems into the Transformer architecture for combination.

(2) A multi-encoder method is proposed to encode the translation results of multiple systems, yielding two different combination models: encoder combination and decoder combination.

    1 Related work

Research on the combination of machine translation systems started in the 1990s, and the early combination technology was mainly based on statistical methods. According to the level at which target translations are combined, the statistical combination method can be divided into three categories. (1) Sentence-level system combination. Taking sentences as the smallest unit, sentence-level system combination rescores the translation results of multiple systems for the same source language sentence by using minimum Bayes risk decoding or a log-linear model[8]. Theoretically, sentence-level system combination does not produce new translation hypotheses, but only chooses the best one among the existing translation hypotheses. (2) Phrase-level system combination. Different from sentence-level system combination, the core idea of phrase-level system combination is to use a phrase table for further decoding[9]. (3) Word-level system combination, which takes words as the minimum unit. First, a confusion network is constructed by using word alignment between sentence pairs in the same language[10]; the confidence of the candidate words at each position of the confusion network is estimated, and then the confusion network is decoded. Compared with sentence-level and phrase-level system combination, word-level system combination is more effective because of its finer combination granularity and more accurate word alignment, and it thus shows large performance advantages. However, the statistical combination method is not an end-to-end modeling method, and error propagation occurs during combination.

Due to the growing popularity of NMT since it was proposed in 2015, system combination has also turned to a new neural network pattern that contains two mechanisms: model-level and parameter-level neural network combination. Model-level combination incorporates the translation results of multiple machine translation systems and then builds a neural network architecture to realize end-to-end modeling. Ref.[11] adopted a long short-term memory (LSTM) network with multiple encoders on the source side, where each encoder corresponds to source language sentences in a different language, and the target side contains a decoder that uses the sum of the last-layer states of the source-side encoders to initialize the hidden state on the target side. Ref.[12] adopted a multi-source translation strategy, inputting source sentences in different languages at the input level and averaging the probability distributions of the target words generated from the different languages at the output level, to reduce the errors of the model's predicted probabilities. Building on Ref.[12], Ref.[13] achieved combination by dynamically controlling the contribution of different translation models to the target-side probability prediction through a gating mechanism. Ref.[14] proposed a neural system combination framework leveraging multi-source NMT. Ref.[15] used an RNN to classify and extend the system combination method and carried out experimental comparisons of five strategies, including average combination, weight combination, splicing combination, gate mechanism combination, and attention combination. Ref.[16] proposed a deep-neural-network-based machine translation system combination. Ref.[17] proposed an approach to model voting for system combination in machine translation.

Parameter-level combination combines the prediction probabilities of multiple decoders within the same encoder-decoder framework to predict the translation at the next moment. Without storing the translation results of multiple machine translation systems, it only uses the decoding strategies of model ensemble[18-19] and model average[20] within the system to fuse at the model parameter level. Ref.[21] improved the Transformer model by using a convolutional neural network (CNN) and a gating mechanism, guided the optimization of model parameters with adversarial training, and then reorganized and merged the outputs of multiple machine translation systems into a single improved translation result through multi-model fusion. Ref.[22] presented a hybrid framework for system combination on the Uyghur-Chinese machine translation task that works in three layers, transmitting the outputs of the first and second layers into the final layer to make better predictions. Ref.[23] proposed an NMT model that treats the generated machine translation outputs as an approximate contextual environment of the target language and then re-decodes each token in the machine translation output successively.

In this paper, a model-level neural network combination method based on Transformer is adopted. Different from previous model-level work, this paper uses a multi-encoder method to encode the source language sentences, merges the translation results of the various systems, and selects the Transformer neural network architecture instead of the RNN and LSTM networks used previously. Compared with parameter-level combination, the proposed method dynamically constructs the attention between the multi-system translations and the source language sentence, as well as the attention between the multi-system translations and the target language sentence, providing additional contextual information and finally realizing the combination of multi-system translations.

    2 Model

    2.1 Problem definition

2.2 System combination based on Transformer

In 2017, Ref.[24] proposed Transformer, an encoder-decoder structure based entirely on attention mechanisms, to realize machine translation. In the Transformer structure, the encoder consists of six layers of networks with the same structure. Each layer is composed of two parts: the first part is multi-head self-attention, and the second part is a position-wise feed-forward network, which is a fully connected layer. Both parts have a residual connection and layer normalization. The decoder also consists of a stack of six identical layers, where each layer is composed of three parts, namely multi-head self-attention, multi-head encoder-decoder attention, and a position-wise feed-forward network. As in the encoder, each part has a residual connection and layer normalization.
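
As an illustration of the layer structure just described, the following is a minimal PyTorch sketch of one encoder layer. It is an assumption made for illustration only (the paper's implementation is based on Fairseq), with the original Transformer default dimensions.

```python
import torch
import torch.nn as nn

class TransformerEncoderLayerSketch(nn.Module):
    """One encoder layer: multi-head self-attention followed by a position-wise
    feed-forward network, each with a residual connection and layer normalization."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads,
                                               dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, pad_mask=None):
        # Part 1: multi-head self-attention (Q = K = V = previous layer output).
        attn_out, _ = self.self_attn(x, x, x, key_padding_mask=pad_mask)
        x = self.norm1(x + self.dropout(attn_out))
        # Part 2: position-wise feed-forward network.
        return self.norm2(x + self.dropout(self.ffn(x)))
```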

In this paper, drawing on research on translation with document-level context (Ref.[25]), Transformer is also used for multi-system translation combination, and the overall model architecture is shown in Fig.1. The model still uses an encoder-decoder structure, and a single encoding layer or decoding layer is the same as that in the original structure. In each sublayer, a residual connection and layer normalization are used. The encoder is a multi-encoder, which accepts the source language sentences and the multiple system translations as input at the same time and encodes the multi-system translations and the source sentences as intermediate hidden layer vectors; that is, a source-sentence encoder and a multiple-translation encoder are obtained. The decoder decodes the target language words one by one according to the intermediate vectors to form a combined translation. Translation attention is introduced on the source language encoder side and the target language decoder side, aiming to make full use of the encoded information of the multi-system translations in the encoder and decoder for attention combination.

    Fig.1 Model architecture

In Transformer, the attention mechanism can be regarded as a process of calculating, for a given series of queries Q and a series of key-value pairs K and V, the weights of V through the interaction of Q and K, and then carrying out the weighted sum of V:

Attention(Q, K, V) = softmax(Q K^T / √d_K) V

where d_K represents the dimension of K, and 64 is used by default. The multi-head attention mechanism used in the Transformer model can simultaneously 'notice' multiple different locations and capture different levels of information. In the encoder's self-attention, Q, K, and V are all from the output of the previous layer of the encoder, and in the decoder's self-attention, Q, K, and V are all from the output of the previous layer of the decoder. In the encoder-decoder attention, Q comes from the output of the previous layer of the decoder, while K and V come from the encoder.
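
A direct reading of this formula in code is sketched below (PyTorch is an assumption; tensor shapes are illustrative).

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_K)) V.
    Q: (..., len_q, d_K); K: (..., len_k, d_K); V: (..., len_k, d_V)."""
    d_K = Q.size(-1)                                    # key dimension, 64 per head by default
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_K)   # similarity of queries and keys
    if mask is not None:                                 # optionally block illegal positions
        scores = scores.masked_fill(mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)              # attention weights over the keys
    return weights @ V                                    # weighted sum of the values
```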

For Transformer-based system combination, this paper introduces two ways of injecting the translation information of multiple systems into Transformer.

    2.2.1 Encoder combination

The encoder combination model is shown in Fig.2. Encoder combination takes multiple system translations, converts them into new representations through the attention network, and fuses the hidden layer information of the synonymous sentences through the gating mechanism in the encoder. In the encoder combination mode, in the self-attention of the multi-system translation encoder, Q, K, and V all come from the upper layer output of the multi-system translation encoder; in the self-attention of the source language encoder, Q, K, and V all come from the upper layer output of the source language encoder. In the translation attention of the source language encoder, both K and V come from the upper hidden layer state H_Tr of the multi-system translation encoder, and Q comes from the upper hidden layer state H_s of the source language encoder. The hidden state of the translation attention part of the encoder, H, is given as

H = MultiHead(Q = H_s, K = H_Tr, V = H_Tr)

    Fig.2 Encoder combination model

where H_s represents the hidden state of the source language sentence, and H_Tr represents the hidden state of the multi-system translation.
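
The following PyTorch sketch illustrates this encoder-side translation attention with gated fusion. The exact form of the gate is not specified in the text, so the sigmoid gate below is an assumed, illustrative choice.

```python
import torch
import torch.nn as nn

class EncoderCombinationSketch(nn.Module):
    """Encoder combination: translation attention with Q = H_s and K = V = H_Tr,
    followed by a gated fusion of the source and attended-translation states."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.translation_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)   # assumed gate parameterization
        self.norm = nn.LayerNorm(d_model)

    def forward(self, H_s, H_Tr):
        # Translation attention: queries from the source encoder,
        # keys and values from the multi-system translation encoder.
        H, _ = self.translation_attn(H_s, H_Tr, H_Tr)
        # Gating mechanism fusing the two hidden representations.
        g = torch.sigmoid(self.gate(torch.cat([H_s, H], dim=-1)))
        return self.norm(g * H_s + (1.0 - g) * H)
```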

    2.2.2 Decoder combination

As shown in Fig.3, the decoder combination method combines the hidden layer information of the multiple encoders with the attention in the decoder. The decoder processes the multiple encoders separately and then fuses them using the gating mechanism inside the decoder to obtain the combined vector. In the decoder combination mode, in the self-attention of the target language decoder, Q, K, and V all come from the output of the previous layer of the target language decoder; in the translation attention of the target language decoder, Q comes from the output of the upper layer of the target language decoder, K comes from the upper hidden layer state H_s of the source language encoder, and V comes from the upper hidden layer state H_Tr of the multi-system translation encoder. In the encoder-decoder attention of the target language decoder, Q comes from the upper layer output of the target language decoder, and K and V come from the output of the source language encoder. H, the hidden state of the translation attention part of the decoder, is given as

H = MultiHead(Q = H_Decoder, K = H_s, V = H_Tr)

    Fig.3 Decoder combination model

where H_s represents the hidden layer state of the source language sentence, H_Tr represents the hidden layer state of the multi-system translation, and H_Decoder represents the hidden layer state of the upper layer output of the decoder.
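
A corresponding decoder-side sketch is given below. Note that taking K from H_s and V from H_Tr requires the two encoders' outputs to have the same length, which is assumed here for illustration.

```python
import torch.nn as nn

class DecoderTranslationAttentionSketch(nn.Module):
    """Decoder combination: translation attention with Q = H_Decoder, K = H_s,
    V = H_Tr, wrapped in the usual residual connection and layer normalization."""
    def __init__(self, d_model=512, n_heads=8, dropout=0.1):
        super().__init__()
        self.translation_attn = nn.MultiheadAttention(d_model, n_heads,
                                                      dropout=dropout, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, H_decoder, H_s, H_Tr):
        # Queries from the previous decoder layer, keys from the source encoder,
        # values from the multi-system translation encoder.
        H, _ = self.translation_attn(H_decoder, H_s, H_Tr)
        return self.norm(H_decoder + self.dropout(H))
```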

    3 Experiments

Experiments were performed on a Chinese-English translation task. The implementation was based on Fairseq[26], and the evaluation metric was case-insensitive BLEU[27].

    3.1 Data and settings

This paper evaluated the combination approach using several publicly available data sets. The training data consisted of 20×10^4 sentence pairs randomly extracted from NEU2017 of the Chinese-English translation track in CCMT2020. The NJU-newsdev2017 and NJU-newstest2017 data sets were used as the validation set and test set, respectively. The NIST 2003-2006 Chinese-English data sets were also used as test sets. All sentences were tokenized with Urheen[28], and byte pair encoding[29] with 32k merge operations was used to segment words into subword units. The statistics of each data set are given in Table 1. Adam[30] was used for optimization, and the systems were trained on one to four GPUs. The learning rate schedule was the same as that used in Ref.[24].

Table 1 Data statistics

Data set   Source             Scale
Train      NEU2017            20×10^4
Valid      NJU-newsdev2017    2002
Test       NJU-newstest2017   1000
Test       NIST 2003          919
Test       NIST 2004          1788
Test       NIST 2005          1082
Test       NIST 2006          1664
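
The subword segmentation step can be reproduced roughly as follows. This is a sketch using the subword-nmt package; the file names are illustrative, and the Chinese text is assumed to have been word-segmented with Urheen beforehand.

```python
import codecs
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

# Learn 32k BPE merge operations on the tokenized training corpus.
with codecs.open("train.tok", encoding="utf-8") as fin, \
     codecs.open("bpe.codes", "w", encoding="utf-8") as fout:
    learn_bpe(fin, fout, num_symbols=32000)

# Apply the learned codes to split every sentence into subword units.
with codecs.open("bpe.codes", encoding="utf-8") as codes:
    bpe = BPE(codes)
with codecs.open("train.tok", encoding="utf-8") as fin, \
     codecs.open("train.bpe", "w", encoding="utf-8") as fout:
    for line in fin:
        fout.write(bpe.process_line(line))
```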

3.2 Training details

In order to verify the effectiveness of this method, three groups of experiments were set up for comparison: combination model versus single-system model, combination model versus statistical combination model, and combination model versus neural network combination model. For the single-system models, the same data set was used to train two Chinese-English NMT systems, Sys1 and Sys2, based on Fairseq with the same model but different initialization seeds. Some parameter settings used for training are shown in Table 2. For the statistical combination model, combination was implemented with five different word alignment methods on top of the trained single systems[2], namely word alignment based on the word error rate (WER), word alignment based on the translation error rate (TER), word alignment based on word ordering (WRA), word alignment based on an indirect hidden Markov model (IHMM), and word alignment based on an incremental hidden Markov model (INCIHMM). For the neural network combination model, the model-average and model-ensemble decoding strategies were implemented on top of the trained single systems.

Table 2 Training parameter settings

Parameter    Value
lr           0.0007
dropout      0.3
max_tokens   3000
max_epoch    30
adam_betas   (0.9, 0.997)
warmup       4000
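
Assuming the corpus has been binarized with fairseq-preprocess, a training run matching Table 2 could be launched roughly as follows. The data path, architecture choice, and remaining options are illustrative assumptions, not the authors' exact command.

```python
import subprocess

subprocess.run([
    "fairseq-train", "data-bin/zh-en",           # assumed binarized data directory
    "--arch", "transformer",
    "--optimizer", "adam", "--adam-betas", "(0.9, 0.997)",
    "--lr", "0.0007", "--lr-scheduler", "inverse_sqrt", "--warmup-updates", "4000",
    "--dropout", "0.3",
    "--max-tokens", "3000", "--max-epoch", "30",
    "--criterion", "label_smoothed_cross_entropy",
    "--seed", "1",                                # Sys1 and Sys2 differ only in the seed
    "--save-dir", "checkpoints/sys1",
], check=True)
```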

    3.3 Results and discussion

The proposed neural combination system is compared with the best individual engines, the statistical combination systems, and the neural network combination systems. The BLEU scores of the different models on the development data and test data are shown in Tables 3 to 5.

As shown in Table 3, on the one hand, the single system Sys2 was superior to Sys1, scoring 0.69 BLEU points higher on average. Compared with Sys1, the encoder combination model gained 1.73-3.19 BLEU points, an average increase of 2.41 BLEU points, while the decoder combination model gained 1.09-2.18 BLEU points, an average increase of 1.61 BLEU points. Compared with Sys2, the encoder combination model gained 1.84-3.36 BLEU points, an average increase of 2.58 BLEU points, while the decoder combination model gained 1.2-2.35 BLEU points, an average increase of 1.78 BLEU points. On the other hand, except on the test set NIST03, the BLEU scores of the five statistical combination models were all better than those of the two single-system models. Among the five statistical combination models, the INCIHMM model performed the best, while the WRA model performed the worst; the INCIHMM model scored 0.53 BLEU points higher than Sys2 on average. On the test set NIST03, the results of all five statistical combination methods were lower than the single-system results before combination.

Analysis of the results showed that the distribution of the NIST03 data set is far from that of the development set; therefore, in the statistical combination models, the parameters tuned on the development set failed to guide good combination on NIST03. Compared with the INCIHMM model, the encoder combination model gained significantly more BLEU points (0.6-1.67), an average increase of 1.06 BLEU points, and the decoder combination model gained 0.71-1.84 BLEU points, an average increase of 1.24 BLEU points. Compared with the WRA model, the encoder combination model gained significantly more BLEU points (1-2.33), an average increase of 1.71 BLEU points, and the decoder combination model gained 1.11-2.5 BLEU points, an average increase of 1.89 BLEU points. Overall, the method proposed in this paper gained 0.71-3.12 BLEU points over the statistical combination methods, indicating that the encoder combination and decoder combination models are better than the statistical combination models.

Table 3 BLEU scores of the single systems, the statistical combination models, and the proposed combination models

System     Valid   Test    NIST03  NIST04  NIST05  NIST06  Ave
Sys1       10.35   12.13   19.83   22.02   19.48   18.53   14.62
Sys2       10.99   12.40   20.16   23.39   20.66   19.54   15.31
WRA        11.08   13.00   17.66   22.90   20.01   19.39   14.86
WER        11.38   13.32   18.43   23.70   20.70   19.92   15.35
IHMM       11.42   13.49   17.53   22.93   20.36   20.09   15.12
INCIHMM    11.48   13.46   18.86   23.82   20.82   20.05   15.50
TER        11.32   13.25   17.67   22.40   20.02   19.83   14.93
Encoder    12.08   14.28   21.86   25.04   21.81   21.72   16.68
Decoder    12.19   14.46   21.98   25.17   22.14   21.89   16.83
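
As a quick arithmetic check, the gains of the decoder combination model over Sys2 quoted above can be recomputed directly from the Table 3 rows (Valid, Test, NIST03-NIST06 columns):

```python
# BLEU scores taken from Table 3 (Valid, Test, NIST03, NIST04, NIST05, NIST06).
sys2    = [10.99, 12.40, 20.16, 23.39, 20.66, 19.54]
decoder = [12.19, 14.46, 21.98, 25.17, 22.14, 21.89]

diffs = [d - s for d, s in zip(decoder, sys2)]
print([round(d, 2) for d in diffs])                 # [1.2, 2.06, 1.82, 1.78, 1.48, 2.35]
print(round(min(diffs), 2), round(max(diffs), 2))   # 1.2 2.35 -> the reported 1.2-2.35 range
print(round(sum(diffs) / len(diffs), 2))            # 1.78    -> the reported average increase
```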

Table 4 BLEU scores of the neural network combination strategies and the proposed combination models

System       Valid   Test    NIST03  NIST04  NIST05  NIST06  Ave
Sys1         10.35   12.13   19.83   22.02   19.48   18.53   14.62
Sys2         10.99   12.40   20.16   23.39   20.66   19.54   15.31
Average      12.03   14.04   21.10   24.49   21.61   21.25   16.36
Ensemble     12.05   14.11   21.47   24.55   21.86   21.32   16.48
Encoder      12.08   14.28   21.86   25.04   21.81   21.72   16.68
 +Average    12.16   14.47   21.92   25.13   22.45   21.96   16.87
 +Ensemble   12.10   14.22   22.09   24.99   22.27   21.77   16.78
Decoder      12.19   14.46   21.98   25.17   22.14   21.89   16.83
 +Average    12.29   14.53   21.95   25.20   22.14   21.94   16.86
 +Ensemble   12.18   14.45   21.92   25.22   22.19   21.74   16.81

As shown in Table 4, in the neural network combination setting, the model-average and model-ensemble decoding strategies were also superior to the single-system models on the development set and the test sets. The model-average result was 0.94-1.71 BLEU points higher than that of Sys2, an average increase of 1.23 BLEU points. The model-ensemble result was 1.06-1.78 BLEU points higher than that of Sys2, an average increase of 1.37 BLEU points. The encoder combination model in this paper was 0.05-0.76 BLEU points higher than the model-average result, an average increase of 0.38 BLEU points. The decoder combination model was 0.16-0.88 BLEU points higher than the model-average result, an average increase of 0.55 BLEU points, and 0.14-0.62 BLEU points higher than the model-ensemble result, an average increase of 0.41 BLEU points. These results show that the encoder combination and decoder combination models are better than the model-average and model-ensemble decoding strategies to a certain extent. In addition, this work applied the model-average and model-ensemble decoding strategies on top of the encoder combination and decoder combination methods, respectively, and the results were further improved. It can be seen that the encoder combination and decoder combination methods can be combined with traditional ensemble learning combination methods to obtain a more robust machine translation model.
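
For reference, the model-ensemble decoding strategy discussed here amounts to averaging the per-step output distributions of several models. A greedy-search sketch is given below; the model interface is an assumption, and beam search is used in practice.

```python
import torch

@torch.no_grad()
def ensemble_greedy_decode(models, src_tokens, bos_id, eos_id, max_len=100):
    """At each step, average the next-token probability distributions of all
    member models, then pick the most probable token (greedy, for brevity).
    Each model is assumed to map (src_tokens, prev_output_tokens) to logits
    of shape (batch, tgt_len, vocab)."""
    prev = torch.full((src_tokens.size(0), 1), bos_id, dtype=torch.long)
    for _ in range(max_len):
        probs = torch.stack(
            [torch.softmax(m(src_tokens, prev)[:, -1, :], dim=-1) for m in models]
        ).mean(dim=0)                                    # ensemble: average the distributions
        next_token = probs.argmax(dim=-1, keepdim=True)  # greedy choice of the next token
        prev = torch.cat([prev, next_token], dim=1)
        if (next_token == eos_id).all():
            break
    return prev
```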

Table 5 shows an example of system combination. The Chinese word rèdài is out-of-vocabulary (OOV) for Sys1 and Sys2, so the statistical combination model could not translate this word correctly. Although the ensemble model could translate the word, its output does not conform to the target grammar. By combining the merits of Sys1 and Sys2 based on Transformer, the proposed model obtains the correct translation. All in all, compared with the best results of statistical combination and neural network combination, the Transformer-based neural system combination method proposed in this paper can effectively integrate the translation results of multiple systems to obtain higher-quality translations.

    4 Conclusion

In this paper, a novel neural network framework based on Transformer was proposed for the system combination of machine translation. The neural combination method is not only able to incorporate multiple system translations and the source sentences in the combination process, but also leverages the attention mechanism of Transformer to obtain fluent translations. Furthermore, the approach can use average and ensemble decoding to boost performance compared with traditional system combination methods. Experiments on Chinese-English data sets showed that the approach obtains significant improvements over the best individual system and the traditional system combination methods. In future work, more translation results will be combined to further improve system combination quality.
