
    Enhancing low-resource cross-lingual summarization from noisy data with fine-grained reinforcement learning*


Yuxin HUANG, Huailing GU, Zhengtao YU†, Yumeng GAO, Tong PAN, Jialong XU

1 Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China

2 Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650504, China

†E-mail: huangyuxin2004@163.com; ztyu@hotmail.com

Received Apr. 27, 2023; Revision accepted Oct. 22, 2023; Crosschecked Nov. 3, 2023; Published online Dec. 27, 2023

Abstract: Cross-lingual summarization (CLS) is the task of generating a summary in a target language from a document in a source language. Recently, end-to-end CLS models have achieved impressive results using large-scale, high-quality datasets typically constructed by translating monolingual summary corpora into CLS corpora. However, due to the limited performance of low-resource language translation models, translation noise can seriously degrade the performance of these models. In this paper, we propose a fine-grained reinforcement learning approach to address low-resource CLS based on noisy data. We introduce the source language summary as a gold signal to alleviate the impact of the translated noisy target summary. Specifically, we design a reinforcement reward by calculating the word correlation and word missing degree between the source language summary and the generated target language summary, and combine it with cross-entropy loss to optimize the CLS model. To validate the performance of our proposed model, we construct Chinese-Vietnamese and Vietnamese-Chinese CLS datasets. Experimental results show that our proposed model outperforms the baselines in terms of both the ROUGE score and BERTScore.

    Key words: Cross-lingual summarization; Low-resource language; Noisy data; Fine-grained reinforcement learning; Word correlation; Word missing degree

    1 Introduction

Cross-lingual summarization (CLS) is the task of automatically generating a short target language summary based on the source language's long text, and it can be regarded as a cross-lingual text generation task. In recent years, data-driven sequence-to-sequence models have achieved considerable performance in cross-lingual generation tasks such as machine translation (Rippeth and Post, 2022), cross-lingual dialogue generation (Kim et al., 2021; Zhou et al., 2023), and video summarization (Li P et al., 2021; Javed and Ali Khan, 2022), and their performance is derived mainly from large-scale, high-quality training data. However, due to the paucity of data, the performance of cross-lingual text generation tasks for low-resource languages is unsatisfactory.

The main challenge encountered in CLS tasks is the construction of extensive, high-quality CLS datasets. Currently, a large number of monolingual summarization datasets have been constructed for rich-resource languages such as Chinese and English. For instance, Hu et al. (2015) compiled a dataset named LCSTS, which includes Weibo posts as source texts and Weibo headlines as summaries. On the other hand, the Cable News Network/Daily Mail corpus primarily consists of news articles from the American Cable News Network and the Daily Mail (Hermann et al., 2015). However, acquiring and constructing CLS datasets through direct means remains a highly challenging task. The mainstream method is to use machine translation to translate the source language text or summary of a monolingual summary dataset into the target language. For example, Zhu et al. (2019) used the round-trip translation (RTT) back-translation strategy to construct a CLS dataset. To ensure high-quality summaries, they applied several filtering criteria based on the ROUGE (Lin, 2004) score, including length consistency, sentence fluency, and meaning preservation. This resulted in a high-quality parallel corpus suitable for CLS research. The construction of CLS datasets based on translation, however, heavily relies on the performance of machine translation models. For low-resource language pairs such as Chinese-Vietnamese, machine translation performance is unsatisfactory, which introduces a significant amount of noise during data construction. In particular, the model's ability to accurately generate summaries is greatly affected when the target language summary, which is used as the reference during training, contains translation errors.

According to our statistics, about 50% of the data exhibit problems such as missing content words and improper word selection, as shown in Fig. 1. In Fig. 1a, the word "phản hồi (respond)" has been inaccurately translated as "處理 (handle)," while in Fig. 1b the content word "ví (wallet)" has been left untranslated. Employing such imprecise and incomplete pseudo-summaries as supervisory signals could potentially misguide the model. Drawing on this insight, we propose to introduce the source language summary and align it with the generated target language summary to assess the adequacy of the generated summaries in terms of word omission and accuracy.

Based on the aforementioned analysis and leveraging the alignment information between the source language summary and the generated target language summary, we propose a fine-grained reinforcement learning based CLS approach to mitigate the errors caused by improper word selection and missing content words, which are prevalent in pseudo-target summaries and can mislead model training. To address the issue of improper word selection during the decoding process, we design a reinforcement learning reward based on the word correlations between the source language summary and the generated target summary. To tackle missing content words in the generated summary, we penalize the decoder according to the importance of the missing words relative to the source language summary.

We propose a fine-grained reinforcement learning reward that incorporates the word correlation and word missing degree between the source language summary and the generated target language summary. We combine this reward with the traditional cross-entropy loss to optimize the model, thus providing more effective guidance for generating the target language summary.

We conduct experiments on the Transformer framework using the Chinese-Vietnamese and Vietnamese-Chinese CLS datasets. The results show that our method achieves significant improvements compared with previous methods. Additionally, reinforcement learning rewards based on a combination of word correlation and missing degree help generate better summaries. Our main contributions are as follows:

1. We model the relationship between the source language summary and the target language summary from a fine-grained perspective, alleviating the error guidance caused by noisy data in CLS tasks.

2. The experimental results show that this method achieves significant improvements compared with previous methods on the Chinese-Vietnamese and Vietnamese-Chinese CLS datasets.

    2 Related works

    2.1 Cross-lingual summarization

CLS is the task of generating a summary in a target language from a document in a source language. Traditional CLS approaches usually adopt a framework of translating first and then summarizing (Leuski et al., 2003; Ouyang et al., 2019) or summarizing first and then translating (Lim et al., 2004; Orăsan and Chiorean, 2008). However, they are often affected by error propagation between the translation and summarization models, and the results are not satisfactory for low-resource languages. Neural network based CLS tasks (Jiang et al., 2022; Wang et al., 2022) are usually seen as similar to machine translation tasks, but the difference is that machine translation maintains the same amount of information in its input and output, whereas CLS requires both compression and translation of information. There are typically two types of methods for low-resource language CLS tasks. The first type consists of summarization methods based on zero-shot learning. Ayana et al. (2018) addressed the lack of source-to-target-language summary datasets by using a pre-trained machine translation model and a headline generation model as teacher networks to guide the learning of a cross-lingual headline generation model. This approach gives the model both translation and summarization abilities and enables it to generate cross-lingual summaries under zero-shot conditions. Nguyen and Luu (2022) employed a monolingual summarization model as the guiding teacher network to facilitate parameter learning in the CLS model. The second type is based on a multi-task joint learning approach that combines machine translation and summarization models to address the problem of sparse training data (Takase and Okazaki, 2020; Liang et al., 2022). Zhu et al. (2019) proposed an end-to-end CLS model based on the Transformer text generation framework. They jointly trained the CLS and monolingual summarization tasks and the CLS and machine translation tasks with parameter sharing at the encoding stage. During training, the two tasks were alternated so that the model acquired the ability to generate cross-lingual summaries. Cao et al. (2020) used generative adversarial networks to align the contextual representations of two monolingual summarization models in the source and target languages, achieving bilingual alignment while performing monolingual summarization. Bai et al. (2021) argued that although joint learning of CLS and machine translation can enhance CLS performance by sharing encoder parameters, the decoders of the two tasks are independent of each other and cannot establish good alignment between the CLS and machine translation tasks. Most of the aforementioned works construct pseudo-parallel CLS datasets from monolingual datasets via machine translation, targeting resource-rich languages such as Chinese and English, for which machine translation performs well and introduces few errors. However, for low-resource languages, translation performance is unsatisfactory, and constructing CLS datasets through translation can introduce a large amount of data noise. Effective analysis and processing methods for CLS in low-resource and noisy data scenarios are still lacking.

    2.2 Reinforcement learning

Reinforcement learning has been widely used in many tasks (Zhao J et al., 2022; Li HQ et al., 2023; Xiong et al., 2023), such as machine translation and text summarization, mainly through global decoding optimization to alleviate the exposure bias problem (Kumar et al., 2019; You et al., 2019). In the summarization task, Paulus et al. (2017) used the ROUGE value between the real summary and the generated summary as a reinforcement learning reward to reward or punish the model, and combined this reward with cross-entropy using linear interpolation as the training objective function, which partially alleviates the exposure bias problem. According to Böhm et al. (2019), the correlation between ROUGE and human evaluation is weak for summaries with a diverse vocabulary, which suggests that using ROUGE as a reinforcement learning reward may not be reliable. They used the source text and the generated summary as the input to learn a reward function from human-scored summaries, achieving better results than those using ROUGE as a reward. Yoon et al. (2021) calculated the semantic similarity between the generated summary and the reference summary based on a language model as a reinforcement learning reward, improving on the word-level matching of ROUGE as a reward. For CLS tasks, Dou et al. (2020) used the similarity between the source language summary and the generated target language summary as a reinforcement learning reward to constrain the model to generate better summaries. Inspired by this study, we believe that by better modeling the correlation between the source language summary and the generated summary, we can effectively use the noise-free source language summary to alleviate the noise problem caused by translation.

    3 Fine-grained reinforcement learning for low-resource cross-language summarization

To address the issue of noise in the supervision signal in low-resource cross-language summarization, we propose a fine-grained reinforcement learning method for cross-language summarization based on the Transformer model (Vaswani et al., 2017). To improve the quality of generated summaries and mitigate the impact of noise in pseudo-target language summaries, we design reinforcement learning rewards based on the word correlation and missing degree between the source language summary and the generated target language summary. The reinforcement learning function is then combined with the maximum likelihood estimation function as the training objective to optimize the generated summary. The model structure is shown in Fig. 2.

    3.1 Model

In traditional CLS models based on the Transformer architecture, given a training set {X^A, Ŷ^B}, where A represents the source language and B represents the target language, each document X^A is mapped to a high-dimensional vector to obtain an input document sequence X^A = {x_1, x_2, ..., x_N}, which is then encoded by the encoder to obtain a vector representation H = {h_1, h_2, ..., h_N} of the document sequence (N is the length of sequence X^A). Finally, the decoder generates a summary sequence Y^B = {y_1, y_2, ..., y_M} based on the given H. During this process, the maximum likelihood estimate between the generated summary Y^B and the reference summary Ŷ^B is used as the optimization objective, and the cross-entropy loss function is defined as follows:

    Fig.2 Fine-grained reinforcement learning model structure to improve the quality of model-generated summary by computing word correlation and missingness between decoder-generated summary and source language summary (Words in red represent the updated words.References to color refer to the online version of this figure)
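A standard token-level maximum likelihood form consistent with this notation (a reconstruction rather than the authors' exact equation) is

\mathcal{L}_{\mathrm{CE}}(\theta) = -\sum_{t=1}^{M} \log P\left(\hat{y}_t^{B} \mid \hat{y}_{<t}^{B}, X^{A}; \theta\right),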

where M is the length of the summary Y^B.

    3.2 Reinforcement learning loss based on word correlation and word missing degree

Given the training set {X^A, Ŷ^B} obtained by translating a monolingual summary dataset, we investigate the noise types that account for a significant proportion of errors in the Ŷ^B data, namely, improper word selection and missing content words (a detailed analysis is given in Section 4.1.2). Therefore, we introduce the source language summary Y^A as a reference and design a reinforcement learning reward by calculating the word correlation and word missing degree between the source language summary Y^A and the generated target language summary Y^B, so as to weaken the error guidance caused by using the pseudo-target language summary as the supervision signal.

In the CLS model, we consider the model as an agent, with the context representation vector obtained at each decoding step t and the partial summary y^B_{<t} generated up to time step t−1 perceived as the environment. During summary generation, the agent needs to choose a word from the candidate word list as the summary word for the current time step t; this selection process constitutes an action of the agent. Upon completing a summary, the model receives a reward R(Y^B, Y^A). The reward function calculation process is shown in Algorithm 1. By assigning higher scores, the model is encouraged to generate better summaries. We use Eq. (4) to calculate the expected reward:
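A plausible form of this expected reward, written out from the description above (a reconstruction rather than the authors' exact Eq. (4)), is

\mathbb{E}\left[R(Y^{B}, Y^{A})\right] = \sum_{Y^{B} \in Y} P\left(Y^{B} \mid X^{A}; \theta\right) R(Y^{B}, Y^{A}),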

where Y represents the set of all candidate summaries that could be generated. The summary generation process thus constitutes an exponential search space.

Algorithm 1 Reward function design
1: Input: Y^A_idf, Y^{A→B}_align  /* TF-IDF values of source language summary words and the word correlation table */
2: score ← 0  /* total reward score */
3: sum_wd ← 0  /* total word missingness penalty score */
4: Count_cor ← 0  /* number of times y^B_{j,sim} is greater than 0 */
5: sum_cor ← 0  /* total word correlation score */
6: for y^A_{j,idf} in Y^A_idf do
7:   y^B_{j,sim} ← sim(y^A_j, Y^{A→B}_align)
8:   if y^B_{j,sim} = 0 then
9:     sum_wd ← sum_wd − y^A_{j,idf}
10:  else
11:    score_cor ← y^B_{j,sim} · y^A_{j,idf}
12:    score ← score + score_cor
13:    sum_cor ← sum_cor + y^B_{j,sim}
14:    Count_cor ← Count_cor + 1
15:  end if
16: end for
17: if Count_cor ≠ 0 then
18:   avg_cor ← sum_cor / Count_cor
19:   score ← score + sum_wd · avg_cor
20: end if
21: return score
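To make the reward computation concrete, the following Python sketch mirrors Algorithm 1; the dictionary-based inputs and the way word correlations are looked up are illustrative assumptions rather than the authors' released implementation.

    def fine_grained_reward(src_idf, align_sim):
        """Reward combining word correlation and a word-missing penalty (cf. Algorithm 1).

        src_idf:   dict mapping each source-summary word to its TF-IDF weight (Y^A_idf)
        align_sim: dict mapping each source-summary word to its correlation score with
                   the generated target summary (0.0 when the word is not covered)
        """
        score = 0.0      # total reward score
        sum_wd = 0.0     # accumulated word-missingness penalty (non-positive)
        sum_cor = 0.0    # accumulated word correlation scores
        count_cor = 0    # number of source words covered by the generated summary

        for word, idf in src_idf.items():
            sim = align_sim.get(word, 0.0)
            if sim == 0.0:
                # missing content word: penalize in proportion to its importance
                sum_wd -= idf
            else:
                # covered word: reward its correlation, weighted by importance
                score += sim * idf
                sum_cor += sim
                count_cor += 1

        if count_cor != 0:
            avg_cor = sum_cor / count_cor
            # scale the missing-word penalty by the average correlation, then add it
            score += sum_wd * avg_cor
        return score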

In practice, a sequence Y^S is often sampled from the probability distribution P(Y^B | X^A; θ) to optimize the expected reward; however, this can result in high variance. To address this issue, we adopt the same method as in previous research (Rennie et al., 2017; Kang et al., 2020) and introduce a baseline reward to reduce gradient variance. We use the self-critical policy gradient algorithm to train the reinforcement learning objective, which involves two summary-generating strategies: one where Y^S is randomly sampled from the conditional probability function P(Y^B | X^A; θ) and the other where Y^G is generated by greedy decoding, with the reward of Y^G serving as the baseline for the training objective of a summary sentence. Finally, the hybrid objective function for training the CLS model is a linear interpolation of the cross-entropy loss function and the reinforcement learning training objective function, where γ is the scale factor between the two loss terms.
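Written out under the standard self-critical formulation and the linear interpolation described above (a reconstruction rather than the authors' exact equations; note that γ = 1 corresponds to no reinforcement learning reward, consistent with Section 4.5.1), the two objectives take the form

\mathcal{L}_{\mathrm{RL}}(\theta) = -\left(R(Y^{S}, Y^{A}) - R(Y^{G}, Y^{A})\right) \sum_{t=1}^{|Y^{S}|} \log P\left(y_t^{S} \mid y_{<t}^{S}, X^{A}; \theta\right),

\mathcal{L}(\theta) = \gamma\, \mathcal{L}_{\mathrm{CE}}(\theta) + (1-\gamma)\, \mathcal{L}_{\mathrm{RL}}(\theta).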

    4 Experiments

    4.1 Data analysis

    4.1.1 Data construction

We constructed two types of CLS datasets: Chinese-Vietnamese and Vietnamese-Chinese. For Chinese-Vietnamese summarization, we used the first 200 000 samples of LCSTS (Hu et al., 2015) for back-translation to obtain the Chinese-Vietnamese CLS dataset (Zh-Visum). For Vietnamese-Chinese summarization, we crawled Vietnamese monolingual data from various news websites, including Vietnam+, Vietnam News Agency, and Vietnam Express. The collected data were then cleaned and back-translated to obtain 115 798 samples of the Vietnamese-Chinese CLS dataset (Vi-Zhsum), where the translation was performed by YunLing translation (http://yuntrans.vip). We used ROUGE (Lin, 2004), BERTScore (Zhang et al., 2020), and MGFScore (Lai et al., 2022) to filter the back-translated data. Taking Zh-Visum as an example, the specific workflow is shown in Fig. 3.

    Fig.3 Zh-Visum filtering flowchart
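To make the round-trip-translation scoring step concrete, here is a minimal sketch, assuming a hypothetical translate(text, src, tgt) wrapper around the YunLing API and a simple character-level ROUGE-1 F1 as the filtering score (the paper also uses BERTScore and MGFScore, omitted here).

    from collections import Counter

    def rouge1_f(reference: str, candidate: str) -> float:
        """Character-level ROUGE-1 F1, a simple proxy for the filtering score."""
        ref, cand = Counter(reference), Counter(candidate)
        overlap = sum((ref & cand).values())
        if overlap == 0:
            return 0.0
        precision = overlap / sum(cand.values())
        recall = overlap / sum(ref.values())
        return 2 * precision * recall / (precision + recall)

    def rtt_score(zh_summary: str, translate) -> float:
        """Round-trip translate a Chinese summary through Vietnamese and score the result."""
        vi = translate(zh_summary, src="zh", tgt="vi")    # pseudo target-language summary
        zh_back = translate(vi, src="vi", tgt="zh")       # round-trip reconstruction
        return rouge1_f(zh_summary, zh_back)

    # Keep only the highest-scoring half of Zh-Visum, as described in Section 4.1.1:
    # scored = sorted(samples, key=lambda s: rtt_score(s["summary"], translate), reverse=True)
    # kept = scored[: len(scored) // 2]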


In Zh-Visum, the lowest-scoring 50% of the filtered data were removed, leaving 100 000 samples; because the Vi-Zhsum back-translation was of relatively high quality, only the lowest-scoring 30% of the data were filtered out, leaving 81 000 samples. Detailed back-translation scores before and after data filtering are shown in Table 1.

    4.1.2 Noise analysis

From Table 1, it can be seen that filtering the back-translated data effectively improves the quality of Zh-Visum and Vi-Zhsum. However, further analysis of Zh-Visum and Vi-Zhsum revealed that only a small proportion of sentences were completely correct, whereas high-quality, large-scale data are needed to train the model. We further analyzed the noise in the cross-lingual data constructed by machine translation according to the noise types defined in the literature (Zhao H et al., 2013).

We randomly selected 100 pairs of source and target language summaries from the Chinese-Vietnamese and Vietnamese-Chinese CLS datasets and manually marked the noise types in the unfiltered data and in the data filtered using three different methods: ROUGE, BERTScore, and MGFScore. Table 2 shows the noise type statistics. Additionally, 50% of the Zh-Visum data and 30% of the Vi-Zhsum data have been filtered out.

We can draw the following conclusions from Table 2:

1. The proportion of error-free sentences in the constructed cross-lingual summary dataset is relatively small. Even though filtering by evaluation metrics can improve data accuracy, it cannot eliminate noisy data entirely. Therefore, beyond improving the quality of the dataset, further research is needed on methods for cross-lingual summary generation under noise.

2. In both Vi-Zhsum and Zh-Visum, the two most frequent types of noise are inappropriate word selection and missing content words. In Zh-Visum, the data are obtained through back-translation from LCSTS, a short-text summary dataset collected from Weibo that uses headlines as summaries. The Chinese words used in these summaries are often concise and to the point, making it easy for machine translation to deviate from the correct understanding and ignore some content words. In Vi-Zhsum, errors in word order are also common: Vi-Zhsum is translated from longer texts, and machine translation tends to have weaker comprehension of the logical sequence between words in longer texts, making it prone to mistakes in word order. This type of noise has a weaker impact on the quality of the generated sentences.

In summary, in both Vi-Zhsum and Zh-Visum, the two most frequent types of noise are improper word selection and missing content words. Therefore, it is necessary to use data filtering to improve the quality of the pseudo-data and to further weaken the noise.

To verify the effectiveness of the proposed model, we constructed a Chinese-Vietnamese CLS dataset Zh-Visum_Filter with 100 000 samples and a Vietnamese-Chinese CLS dataset Vi-Zhsum_Filter with 81 000 samples, using the filtering method described in Section 4.1.1. The detailed data are shown in Table 3, where BERT denotes using BERTScore to filter the Zh-Visum data, MGF denotes using MGFScore to filter the Vi-Zhsum data, and RG denotes using the ROUGE metric to filter the data. Regardless of the filtering and training method used, the test sets are the same for the same language.

    Table 1 Back-translation score for Zh-Visum and Vi-Zhsum

    Table 2 The proportion of data of different noise types in Zh-Visum and Vi-Zhsum

    4.2 Evaluation metrics

In this study, the quality of summaries generated by the CLS model was evaluated using two evaluation metrics. As with most summarization works, one is based on a statistical method called ROUGE (Lin, 2004), which calculates the co-occurrence degree of N-grams between the standard summary and the generated summary, and the formula is as follows:
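A standard statement of this formula, consistent with the definitions given below (reproduced in its usual recall-oriented form from Lin (2004)), is

\mathrm{ROUGE\text{-}N} = \frac{\sum_{\mathrm{gram}_N \in \mathrm{Ref}} \mathrm{Count}_{\mathrm{match}}(\mathrm{gram}_N)}{\sum_{\mathrm{gram}_N \in \mathrm{Ref}} \mathrm{Count}(\mathrm{gram}_N)},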

where G is the generated summary, Ref is the reference summary, gram_N is an N-gram phrase, Count_match(gram_N) is the number of N-grams overlapping between the generated summary and the reference summary, and Count(gram_N) is the number of N-grams in the reference summary. N is usually set to 1, 2, and L (L denotes the longest common subsequence). In this study, ROUGE-1, ROUGE-2, and ROUGE-L are used to evaluate the quality of the generated summary, denoted as RG-1, RG-2, and RG-L, respectively.

The other evaluation method for measuring the quality of generated summaries in a CLS model is based on deep semantic matching, as proposed by Zhang et al. (2020). This method, called BERTScore, uses a pre-trained language model to calculate the semantic similarity between the generated and reference summaries, and it is now widely used to evaluate the quality of generated summaries. For Chinese, the pre-trained model used for scoring is "bert-base-chinese," while for Vietnamese the pre-trained model used is "bert-base-multilingual-cased." When BERTScore is used for evaluation, the "<unk>" tokens in the generated summaries are replaced with the "[UNK]" token from the BERT vocabulary.
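For reference, BERTScore can be computed with the bert-score Python package roughly as follows; the sample sentences are illustrative, and this shows package usage rather than the authors' evaluation script.

    from bert_score import score  # pip install bert-score

    # Toy generated and reference Chinese summaries
    cands = ["越南至今共记录了100名患者。"]
    refs = ["到目前为止，越南共记录了100名患者。"]

    # Chinese summaries are scored with "bert-base-chinese";
    # Vietnamese summaries would use "bert-base-multilingual-cased" instead.
    P, R, F1 = score(cands, refs, model_type="bert-base-chinese")
    print(f"BERTScore F1: {F1.mean().item():.4f}")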

    4.3 Experiment setup

The model code was implemented using the PyTorch framework. The Transformer encoder and decoder were both set to six layers, with eight attention heads and a hidden vector dimension of 512. The feedforward neural network was configured with a size of 1024. The model used a teacher-forcing strategy, with label smoothing set to 0.1 and dropout set to 0.1. The model was trained with a warmup phase of 3000 steps, and gradients were accumulated every two steps. During decoding, a beam search strategy with a beam size of five was used. It is worth noting that, similar to Wu et al. (2019) and Unanue et al. (2021), for models with reinforcement learning strategies, we used the unfiltered Zh-Visum and Vi-Zhsum data for parameter initialization and then trained the model using the filtered data.
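As a minimal sketch of this configuration (illustrative only; the use of torch.nn.Transformer and the variable names are assumptions, not the authors' code):

    import torch.nn as nn

    # Encoder-decoder Transformer with the hyperparameters reported in Section 4.3
    model = nn.Transformer(
        d_model=512,            # hidden vector dimension
        nhead=8,                # attention heads
        num_encoder_layers=6,
        num_decoder_layers=6,
        dim_feedforward=1024,   # feed-forward network size
        dropout=0.1,
    )

    # Cross-entropy with label smoothing 0.1 for the maximum-likelihood term
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)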

    4.4 Baselines

To verify the effectiveness of the proposed fine-grained reinforcement learning approach for Chinese-Vietnamese CLS, we trained and compared the following baseline models on the Zh-Visum_Filter and Vi-Zhsum_Filter datasets:

1. Sum-Tra: It is a traditional CLS method that generates a summary in the source language first and then translates it into the target language.

2. Tra-Sum: Similar to Sum-Tra, it is a two-step CLS method that translates the source language document into the target language and then generates a summary in the target language. In Sum-Tra and Tra-Sum, YunLing translation is used as the machine translation model, and an unsupervised extractive method called LexRank (Takase and Okazaki, 2020) is used as the summarization model.

    Table 3 Experimental data details

3. NCLS (Zhu et al., 2019): This is an end-to-end neural network CLS model based on the Transformer framework. It incorporates two related tasks, monolingual summarization and machine translation, to further improve model performance.

4. MCLAS (Bai et al., 2021): This CLS method is based on a multi-task framework that sequentially performs monolingual summarization and CLS, using multilingual BERT (mBERT) to initialize the Transformer encoder.

5. KDCLS (Nguyen and Luu, 2022): This is a knowledge distillation based framework for CLS, seeking to explicitly construct cross-lingual correlation by distilling the knowledge of a monolingual summarization teacher into the CLS student.

6. LR-ROUGE (Yoon et al., 2021): This method is similar to the proposed method, but it uses RG-L to calculate the expected reward.

7. XSIM (Dou et al., 2020): This method employs reinforcement learning to directly enhance a bilingual semantic similarity metric between the summaries generated in the target language and the gold summaries in the source language.

8. LR-MC: The proposed CLS model, which combines cross-entropy and reinforcement learning as the optimization objective. The expected reward is calculated based on the word missing degree and word correlation between the source language summary and the generated target language summary.

    4.5 Analysis of experimental results

    We designed experiments from different perspectives to verify the effectiveness of the Chinese-Vietnamese and Vietnamese-Chinese CLS method based on fine-grained reinforcement learning under noisy data.

First, the effects of the proposed fine-grained reinforcement learning were compared with those of the baseline models. Second, the improvements brought by the word correlation reward and the word missing penalty designed for noisy data were explored, and the impact of each component on the model was analyzed. Third, the influence of the scale factor between the cross-entropy loss function and the reinforcement learning training objective function on model performance was investigated. Next, the neural network model was trained using data before and after noise filtering, and the performance of the model under the different data settings was compared. Finally, a case study was conducted on the summaries generated by different models.

    4.5.1 Experimental results

The results of the comparison between the proposed model and the baselines are shown in Table 4. Here, γ represents the scale factor between the cross-entropy loss and the expected reward; γ = 1 means that no reinforcement learning reward is added.

From Table 4, it can be seen that the proposed method performed the best on both the Chinese-Vietnamese and Vietnamese-Chinese cross-language summarization tasks (achieving the best performance with γ set to 0.6). The LR-MC model trained on noisy data and then fine-tuned on real data showed a further improvement in performance. LR-MC showed a larger improvement in the RG-2 metric, which may be due to the higher quality and better coherence of the pseudo-summary texts in the real data. Compared with XSIM, the LR-MC model performed better in both the Chinese-Vietnamese and Vietnamese-Chinese cross-language summarization tasks. Additionally, in comparison to KDCLS, LR-MC achieved higher summary quality, exhibiting particularly significant improvements in the Chinese-Vietnamese cross-language summarization task. This outcome can be attributed to the significant linguistic disparities between these two languages, suggesting that the guidance provided by monolingual summarization or translation may not effectively support low-resource CLS.

Compared with directly using the cross-entropy loss function to optimize the model, adding the proposed fine-grained expected reward can effectively weaken the noise. On the Zh-Visum data, RG-1 improved by 2.59%, RG-2 by 4.19%, RG-L by 3.50%, and BERTScore by 0.30%. Similarly, on the Vi-Zhsum data, RG-1 improved by 2.78%, RG-2 by 1.97%, RG-L by 1.87%, and BERTScore by 0.34%. Compared with using ROUGE as the reward expectation together with the cross-entropy loss function, calculating the reward expectation from the word correlation and word missing degree between the real source language summary and the generated target summary further improves the model performance in RG-L. This shows that the proposed fine-grained reinforcement learning method performs well in both Chinese-Vietnamese and Vietnamese-Chinese CLS tasks, on noisy data with either short or long texts, and can weaken, to some extent, the impact of the noise introduced by pseudo-target language summaries.

    4.5.2 Ablation experiment

To verify the effect of the reinforcement learning reward based on word correlation and word missing degree on model performance, each of the two reward components was evaluated separately in an ablation experiment, and the results are shown in Table 5.

According to Table 5, both the word correlation and the word missing degree between the source language summary and the generated target summary were helpful in improving model performance. When only the word missing degree between the source language summary and the generated target language summary was used as the expected reward (LR_mis), the performance decrease was more significant, and when only the word correlation was used as the expected reward (LR_cor), the performance decrease was relatively small. We believe that this is due to two reasons. First, when using only the word missing degree, the information obtained by the model is relatively limited. Second, the word missing degree is designed for the noise type of missing content words, and the analysis of the noisy data shows that the proportion of missing content words is smaller than that of improper word selection.

4.5.3 γ parameter experiment

From Table 6, it can be seen that the model performed best when γ was set to 0.6. As γ decreased, the proportion of the reward expectation increased, and the model performance did not reach its optimal level. From the decoding results on the test set, it was found that increasing the proportion of the reward expectation resulted in a higher proportion of out-of-vocabulary words in the decoded summary, which was the main reason for the decrease in summary generation quality.

We believe that when the reinforcement learning reward is used as the optimization objective, the word-level reward based on the source language summary carries rich word-level information but does not capture the logical relationships or sequential features between words in the target language. Compared with Chinese-Vietnamese CLS of short texts, Vietnamese-Chinese CLS of long texts places greater weight on word order and the logical relationships between words. This is also why the model performance decreases more quickly when the proportion of the expected reward increases in Vietnamese-Chinese CLS. Therefore, even though the word-level reward based on the source language summary designed in this study is effective in reducing noise, it is not recommended to use this reward alone to train the model. Using the expected reward in combination with the cross-entropy loss can better learn the word order information between target language words while reducing noise.

    Table 4 Results of comparison with baseline models

    Table 5 Ablation experiment results

    4.5.4 Exploring the effect of noisy data on model performance

To fully investigate the impact of noisy data on neural network models, we conducted comparison experiments using the basic Transformer framework. The specific results are shown in Table 7.

Table 7 shows that neural network models were sensitive to noisy data and that filtering out noisy data was more conducive to model learning when the same amount of data was used for training. In the Chinese-Vietnamese cross-language summarization dataset, the data noise was relatively high, and training the model with the top 100 000 high-quality samples was still more conducive to generating readable summaries than training with the unfiltered 200 000 samples. In comparison, the Vi-Zhsum dataset was smaller but of relatively high quality. Training the model after filtering out the lowest-scoring 30% of the data resulted in slightly worse performance in the RG-2 and RG-L metrics than using all the data for training, but the noise had a negative impact regardless of the dataset. Therefore, Chinese-Vietnamese cross-language summarization must be studied starting from the reality of noisy data.

    4.5.5 Case analysis

Table 8 presents examples of summaries generated by different summarization models, using the Vi-Zhsum task as an example. From Table 8, it can be seen that the proposed method generated the highest-quality summaries among the compared models. The base model trained on unfiltered data (Transformer-all) generated less summary information. After further training with high-quality data, all models attempted to generate more informative summaries, but only the proposed Vi-Zhsum fine-grained reinforcement learning summarization model generated the key information: "So far, Vietnam has recorded <unk> patients."

    Table 6 Experimental results at different γ values

    Table 7 Experimental results under different noisy data

    5 Conclusions

In this paper, we analyze the noise problem in Chinese-Vietnamese cross-language summarization and propose a fine-grained reinforcement learning cross-language summarization method targeting the two types of noise that dominate pseudo-target summaries: improper word selection and missing content words. Using the real source language summary as a reference, the method calculates the expected reward from the word correlation and word missing degree between the source summary and the generated summary to weaken the noise. The traditional cross-entropy loss between the pseudo-target language summary and the generated summary is also retained to learn the word order relationships in the target language. The combination of the reinforcement learning loss and the cross-entropy loss is used as the optimization objective for model training, reducing the negative impact of noisy data when pseudo-target language summaries are used directly to train the model and enhancing the quality of the generated summaries. In addition, the experiments explore the impact of noisy data on neural network models, and the results show that high-quality data are more conducive to model training.

    Contributors

    Yuxin HUANG designed the research.Yumeng GAO processed the data.Huailing GU drafted the paper.Tong PAN and Zhengtao YU helped organize the paper.Huailing GU and Jialong XU revised and finalized the paper.

    Compliance with ethics guidelines

    Yuxin HUANG, Huailing GU, Zhengtao YU, Yumeng GAO, Tong PAN, and Jialong XU declare that they have no conflict of interest.

    Data availability

    Due to the nature of this research, participants of this study did not agree for their data to be shared publicly, so the supporting data are not available.
