
    Gate-Attention and Dual-End Enhancement Mechanism for Multi-Label Text Classification

Computers, Materials & Continua, 2023, No. 11

Jieren Cheng, Xiaolong Chen, Wenghang Xu, Shuai Hua, Zhu Tang and Victor S. Sheng

1 School of Computer Science and Technology, Hainan University, Haikou, 570228, China

2 Hainan Blockchain Technology Engineering Research Center, Hainan University, Haikou, 570228, China

3 School of Cyberspace Security, Hainan University, Haikou, 570228, China

4 Department of Computer Science, Texas Tech University, Lubbock, 79409, USA

ABSTRACT In Multi-Label Text Classification (MLTC), the dual challenges of extracting rich semantic features from text and discerning inter-label relationships have spurred innovative approaches. Many studies on semantic feature extraction turn to external knowledge to augment the model's grasp of textual content, often overlooking intrinsic textual cues such as label statistical features. In contrast, these endogenous insights naturally align with the classification task. In this paper, to complement the focus on intrinsic knowledge, we introduce a novel Gate-Attention mechanism that integrates statistical features of the text itself into its semantic representation, enhancing the model's capacity to understand and represent the data. In addition, to address the task of mining label correlations, we propose a Dual-end enhancement mechanism that mitigates the information loss and erroneous transmission inherent in traditional long short-term memory propagation. We conducted extensive experiments on the AAPD and RCV1-2 datasets, which confirm the efficacy of both the Gate-Attention mechanism and the Dual-end enhancement mechanism. Our final model clearly outperforms the baseline models, attesting to its robustness. These findings underscore the importance of considering not only external knowledge but also the inherent characteristics of textual data when building effective MLTC models.

KEYWORDS Multi-label text classification; feature extraction; label distribution information; sequence generation

    1 Introduction

Today, Artificial Intelligence technology is in the ascendant, and Natural Language Processing (NLP) is growing rapidly. In the era of big data, text classification, as one of the fundamental tasks in NLP, has received considerable attention driven by the urgent demand for efficient text information processing techniques. Text classification [1] refers to assigning a given text to preset labels; the text can be a sentence, a paragraph, or even a document. Text classification also supports downstream tasks in NLP such as information retrieval [2], topic division [3], and question-answering systems [4]. As one of the more complex scenarios in text classification, Multi-Label Text Classification (MLTC) needs to take into account both text feature extraction and the mining of label correlations.

In recent years, with the introduction of the Sequence Generation Model (SGM) [5], the Sequence-to-Sequence research paradigm has been widely adopted in MLTC. In this framework, the model is split into two parts: an Encoder and a Decoder. The Encoder module is dedicated to extracting semantic features, while the Decoder module is dedicated to mining the correlations between labels and performing classification. Currently, there is a growing body of research on semantic feature extraction that introduces exogenous knowledge to enhance the model's understanding of text. The problem with such exogenous knowledge is that it inevitably brings noise along with new knowledge; if the noise is not handled properly, it can hurt performance. This problem can be mitigated by exploiting inherent, intrinsic information of the text itself, such as statistical features. Compared with exogenous knowledge, this endogenous knowledge has the advantage of being naturally compatible with the corresponding classification task [6]. However, statistical features and semantic features are incompatible in scale and dimensionality, and not all the information in statistical features is worth incorporating into the semantic features, so a high-quality fusion strategy is needed to combine the two. In addition, sequence generation models are often used to mine the correlations between labels. SGM transforms the multi-label classification problem into a sequence generation problem in order to effectively mine these correlations. However, decoding text feature vectors in this way suffers from information loss and erroneous propagation [7], which makes it harder for the model to keep generating correct labels.

To address the above issues, we propose the VFS model composed of V-Net, F-Net and S-Net, where V-Net refers to the Variational Encoding Network, F-Net to the Feature Adaptive Fusion Network, and S-Net to the Sequence Enhancement Generation Network. We draw inspiration from the Adaptive Gate Network (AGN) [6] in designing V-Net and F-Net, which better adapt to MLTC tasks. In V-Net, we reconstruct the original label statistical features and map them into a continuous vector space, which also addresses the mismatch between the dimensions of the original label statistical features and the semantic features. In the F-Net module, we propose a Gate-Attention mechanism that enables statistical and semantic features to be fused across scales and reallocates attention weights during fusion, so that statistical information not worth learning from the current semantic features releases its weight to more important statistical information. Compared to other fusion strategies, the Gate-Attention mechanism enables the model to autonomously discern useful information in the statistical features, thus reducing noise interference. In the S-Net module, we propose a Dual-end enhancement mechanism, which introduces the original hidden vector at the input end of the Long Short-Term Memory (LSTM) cells for reference and uses an attention mechanism to enhance the weight of important information at the output end, effectively alleviating information loss and error transmission during LSTM propagation. The main contributions of this paper are as follows:

• We propose a novel label distribution information extraction module, which can fully capture the mapping relationship between labels and text, and thus form a unique distributed representation of the text.

• We design a feature fusion strategy, which integrates the label distribution information into the original semantic feature vector of the text based on the attention mechanism.

• A large number of experiments have been carried out on two datasets, and the experimental results fully prove the effectiveness of our proposed framework.

• We propose a novel label sequence generation module, which transforms the multi-label classification problem into a label sequence generation problem and fully exploits the correlation between labels.

    2 Related Work

    2.1 Feature Extraction

Extracting and fusing features from multiple views can help models understand text from multiple perspectives and at a deeper level, which is a mainstream idea in current feature extraction research. Currently, most scholars rely on information other than the input text to assist the model in understanding semantics, such as Chinese Pinyin, Chinese radicals, and English parts of speech. For Chinese, Liu et al. [8] used the pinyin of Chinese characters to assist the model in understanding Chinese, while Tao et al. [9] used the associations between Chinese characters to obtain information that helps the model understand the text. Liu et al. [10] fused three characteristics of Chinese characters: font shape, font sound, and font meaning. Hong et al. [11] even calculated the similarity between characters using strokes and sounds. For English, Li et al. [6] designed a statistical information vocabulary based on the parts of speech of English words and used it to perform deep-level feature extraction of text. In addition, Chen et al. [12] introduced conceptual information and entity links from a knowledge base into the model pipeline through an attention mechanism. Li et al. [13] combined domain knowledge and dimension dictionaries to generate word-level sentiment feature vectors. Zhang et al. [14] improved fine-grained financial sentiment analysis by combining statistical distribution methods with semantic features. Li et al. [15] improved emotion-relevant classification tasks by combining fine-grained emotion concepts and distribution learning. Li et al. [16] enabled the extraction of global semantics at both the token level and the document level by redesigning the self-attention mechanism and recurrent structure. Li et al. [17] addressed the potential inter-class confusion and noise caused by coarse-grained emotion distributions by generating fine-grained emotion distributions and using them as model constraints. However, these efforts rarely consider the necessity and compatibility of the added information, so it is impossible to avoid introducing noise along with new knowledge into the model.

    2.2 Multi-Label Text Classification

There are two types of solutions for mining the associations between labels. One is based on problem transformation, which transforms the data so that existing algorithms designed for single-label classification can be applied. For example, the Binary Relevance (BR) algorithm was proposed by Boutell et al. [18], but because it does not mine the correlations between labels, its classification performance is limited. Thereafter, Read et al. [19] proposed the Classifier Chain (CC) to address this drawback. This model links binary classifiers in a chain, where each classifier is trained on the input space augmented with the predictions of the classifiers that precede it in the chain. The Label Powerset (LP) algorithm proposed by Tsoumakas et al. [20] converts each distinct subset of category labels into a separate class for training. The other type is based on algorithm adaptation, mainly improving existing algorithms designed for single-label classification so that they become applicable to MLTC. Chen et al. [21] proposed CNN-RNN, a model that extracts text feature vectors with a Convolutional Neural Network (CNN) and then feeds these vectors into a Recurrent Neural Network (RNN) to output labels. Yang et al. [5] proposed the SGM model by introducing the attention mechanism into the Sequence-to-Sequence (Seq2Seq) model and applying it to MLTC. Later, Yang et al. [22] improved SGM by adding a Set Decoder module to reduce the impact of incorrect labels. Chen et al. [23] designed an MLTC model with Latent Word-Wise Label Information (MLC-LWL) to eliminate the effects of the predefined label order and exposure bias in Sequence-to-Set (Seq2Set). In terms of classification performance, models based on Seq2Seq are more advantageous.

    3 Model

In this section, we describe the implementation details of the VFS model. The overall framework is shown in Fig. 1.

Figure 1: The overall framework of the proposed VFS

    3.1 Problem Definition

MLTC refers to finding, for each text, its matching subset of a label set. Mathematically, given a set of text samples T = {t1, t2, ..., tm} and a set of labels L = {l1, l2, ..., ln}, the goal is to learn a mapping function f: T → 2^L, where 2^L represents the power set of L, which contains all possible label combinations. For each text sample ti ∈ T, the function f predicts a label set f(ti) ⊆ L, which may contain zero or more labels.
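As a toy illustration of this formulation (not part of the VFS implementation; the labels and texts below are made up), each predicted label subset can be encoded as a multi-hot vector over L:

```python
# Toy illustration of the MLTC problem definition: each sample maps to a subset
# of the label set L, encoded here as a multi-hot vector. Names are illustrative.
labels = ["cs.CV", "cs.CL", "cs.LG", "stat.ML"]   # the label set L (n = 4)
texts = {
    "t1": {"cs.CL", "cs.LG"},                     # f(t1) = {cs.CL, cs.LG}
    "t2": set(),                                  # an empty label subset is allowed
}

def encode(subset, label_set):
    """Map a label subset (an element of the power set 2^L) to a 0/1 vector."""
    return [1 if l in subset else 0 for l in label_set]

for t, subset in texts.items():
    print(t, encode(subset, labels))   # t1 -> [0, 1, 1, 0], t2 -> [0, 0, 0, 0]
```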

    3.2 V-Net:Variational Encoding Network

Due to the discrete nature of the initial label statistical features in the vector space, it is difficult to represent the statistical features in depth, and their dimensions do not match the semantic features. Therefore, we designed V-Net to reconstruct the original label statistical features and obtain a statistical feature that matches the semantic feature dimension and carries deep-level information. The frame diagram is shown in Fig. 2.

Figure 2: V-Net and F-Net frame diagram

The contribution of different words in a text to the semantics of the text varies, and the contribution of the same word may also differ across texts. Some words in the text are associated with the text's labels, which means that when the probability of a word appearing under a label is particularly high or low, the word can be considered to contribute significantly to the label classification of the sentence. We first define a text Ti = {w1, w2, ..., wc} with a length of c, which corresponds to a set Li = {l1, l2, ..., ld} containing d labels. After stacking the statistics in order, we obtain a Table of Label Frequency (ToLF) covering all words. From the ToLF we can obtain a vector ξw = [ξ1, ξ2, ..., ξn] representing a word and a vector ζl = [ζ1, ζ2, ..., ζm] representing a label, where n and m both denote dimensions.

Not all high-frequency words contribute significantly to the semantics of a text, so we first filter these words. We assume that a word with a real semantic contribution should follow an approximately normal distribution across all texts, so words whose frequencies do not follow a normal distribution are filtered out first and are not used subsequently.
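As an illustration (a sketch under our own assumptions, since the paper does not specify the statistical test used), the snippet below builds such a word-label frequency table and drops words whose per-text counts clearly reject normality:

```python
# Sketch of building a Table of Label Frequency (ToLF) and filtering words whose
# per-text frequency profile deviates strongly from a normal distribution. The choice
# of scipy.stats.normaltest and the 0.05 threshold are assumptions for illustration.
from collections import Counter, defaultdict
import numpy as np
from scipy import stats

corpus = [(["attention", "translation", "model"], ["cs.CL"]),
          (["image", "attention", "model"], ["cs.CV"]),
          (["graph", "model"], ["cs.LG", "stat.ML"])]

label_set = sorted({l for _, ls in corpus for l in ls})
tolf = defaultdict(Counter)            # tolf[word][label] = word/label co-occurrence count
per_text_counts = defaultdict(list)    # word -> list of counts, one entry per text

for words, labels in corpus:
    counts = Counter(words)
    for w, c in counts.items():
        per_text_counts[w].append(c)
        for l in labels:
            tolf[w][l] += c

def keep_word(w, alpha=0.05, min_texts=8):
    """Keep a word only if its per-text counts do not clearly reject normality."""
    x = np.array(per_text_counts[w], dtype=float)
    if len(x) < min_texts:             # too few observations to run the test
        return True
    _, p = stats.normaltest(x)
    return p > alpha

vocab = [w for w in tolf if keep_word(w)]
# Raw statistical vector xi_w for a word: its frequency under each label.
xi = {w: np.array([tolf[w][l] for l in label_set], dtype=float) for w in vocab}
print(label_set, xi["model"])
```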

The vector dimensions of the original statistical features do not match the semantic features of the text, and vectors constructed from this positional relationship struggle to represent fine-grained semantics. For this reason, we use an Auto-Encoder to reduce the dimension of the original distributed representation vector. To make the distribution of the feature vectors more consistent with real data and to reduce noise interference, we use a Variational Auto-Encoder (VAE) [24] for this process.

If the statistical vector of a label is ζl = [ζ1, ζ2, ..., ζm], then the statistical matrix of all labels is ζL = [ζl1, ζl2, ..., ζln], where n denotes the dimension of ζL ∈ R^(m×n). Unlike an ordinary Auto-Encoder, the VAE fits a probability distribution. Assuming that the intermediate vector z follows a standard multivariate Gaussian distribution, where I represents the identity matrix, the prior is given in formula (1):

p(z) = N(z; 0, I)    (1)

For the VAE, the encoder samples an intermediate vector z from the prior distribution p(z), and the decoder then samples the reconstruction of X from the conditional distribution p(X|z) given the intermediate vector z. To facilitate learning and training of the neural network, the decoder is parameterized by θ, and its calculation process is shown in formula (2):

pθ(X|z) = N(X; μ, σ²I)    (2)

where μ represents the mean and σ represents the standard deviation. The task of the model is mainly to fit a distribution pθ(X) close to the real distribution p(X). Then pθ(X) is:

pθ(X) = ∫ pθ(X|z) p(z) dz

However, if pθ(X) is estimated by sampling a large number of zj from p(z), the requirements on the vector dimensions of X and z become too high, which is not suitable for neural network training. Therefore, we can consider the posterior distribution pθ(z|X), obtained from the Bayesian formula:

pθ(z|X) = pθ(X|z) p(z) / pθ(X)

However, the denominator in the above formula still requires sampling a large number of zj from p(z), so an encoder parameterized by Φ is used to fit a distribution pΦ(z|X) that approximates pθ(z|X). In addition, because pθ(X|z) and p(z) both obey multivariate Gaussian distributions, the posterior distribution pθ(z|X) also obeys a multivariate Gaussian distribution. So:

pΦ(z|X) = N(z; μ, σ²I)

However, the neural network cannot backpropagate through the sampling operation when training the model with the loss function, so it is necessary to first sample an ei from the standard multivariate Gaussian distribution N(0, I) and then calculate zi:

zi = μ + σ ⊙ ei

where ⊙ represents the element-wise product operation.

This module is trained independently of the rest of the model, and only the intermediate vectors z need to be extracted for subsequent use in this paper. The input to the VAE is the label statistical matrix ζL, from which we obtain the feature matrix EL ∈ R^(m×D) representing the labels, where D denotes the dimension of the reconstructed statistical feature vectors.
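A minimal PyTorch sketch of this step is given below; the layer sizes, the Gaussian reconstruction loss, and the training details are our assumptions, and only the intermediate code z (playing the role of EL) is kept for later fusion:

```python
# Minimal VAE sketch (PyTorch) for reconstructing the label statistical vectors.
# Hidden sizes and loss weighting are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class StatVAE(nn.Module):
    def __init__(self, in_dim, z_dim=256):
        super().__init__()
        self.enc = nn.Linear(in_dim, 512)
        self.mu = nn.Linear(512, z_dim)        # mean of q_phi(z | X)
        self.logvar = nn.Linear(512, z_dim)    # log-variance of q_phi(z | X)
        self.dec = nn.Sequential(nn.Linear(z_dim, 512), nn.ReLU(), nn.Linear(512, in_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)             # e_i ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * eps # reparameterization: z = mu + sigma ⊙ e
        return self.dec(z), mu, logvar, z

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term + KL(q_phi(z|X) || N(0, I)), the standard VAE objective.
    rec = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# The VAE is trained separately on the label statistical matrix; afterwards only the
# codes z (here E_L, one row per label) are kept for the F-Net.
vae = StatVAE(in_dim=100, z_dim=256)
zeta_L = torch.rand(54, 100)                   # e.g., 54 labels with 100-dim raw statistics
x_hat, mu, logvar, E_L = vae(zeta_L)
print(E_L.shape)                               # torch.Size([54, 256])
```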

    3.3 F-Net:Feature Adaptive Fusion Network

The F-Net extracts the text semantics and fuses them with the statistical features from the V-Net. Because of the scale incompatibility between statistical and semantic features, and the presence of noise in the statistical features, we designed a Gate-Attention mechanism to assign weights to the statistical features and filter them. After a weighted summation, we obtain a feature vector that represents the text with high quality. Finally, we concatenate the two vectors. The frame diagram is shown in Fig. 2.

First of all, we extract feature vectors of the input text via a bidirectional LSTM [25]. The input wt of the t-th time step is passed into two LSTM units, so we obtain hidden vectors from both directions of the output:

h→t = LSTM_fw(wt, h→t−1),  h←t = LSTM_bw(wt, h←t+1)

Therefore, we obtain the final hidden representation of the t-th time step by concatenating the hidden states from both directions, yt = [h→t; h←t], and the feature matrix of the entire text, ET = [y1, y2, ..., yc], where c denotes the length of the text, yc ∈ R^(1×D) denotes the last hidden state, and D denotes the dimension of the semantic features.
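A short sketch of this encoder, assuming PyTorch's built-in bidirectional LSTM, where the full output sequence plays the role of ET and the last time step plays the role of yc:

```python
# Sketch of the Bi-LSTM text encoder (PyTorch). Using hidden_size = D // 2 per direction
# so the concatenated state has dimension D, matching the paper's notation; this split
# is our assumption.
import torch
import torch.nn as nn

D, vocab_size, c = 256, 30000, 500
embed = nn.Embedding(vocab_size, D)
bilstm = nn.LSTM(input_size=D, hidden_size=D // 2, bidirectional=True, batch_first=True)

tokens = torch.randint(0, vocab_size, (1, c))   # one tokenized text of length c
E_T, _ = bilstm(embed(tokens))                  # E_T: (1, c, D), each y_t = [h_fwd; h_bwd]
y_c = E_T[:, -1, :]                             # last hidden state, shape (1, D)
print(E_T.shape, y_c.shape)
```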

We propose a Gate-Attention mechanism that combines statistical features with semantic features. We regard yc as the query and EL as both the key and the value to implement the attention mechanism. First, we obtain the attention weight α′ℓ for each ELℓ, where ELℓ denotes the ℓ-th vector in EL (1 ≤ ℓ ≤ m):

α′ℓ = f(yc, ELℓ)

where α′ ∈ R^(1×m) denotes the attention weight and f denotes the distance function, which in this paper is an element-wise dot product operation. Then we obtain α = [α1, ..., αℓ, ..., αm] by normalizing α′ with the softmax function:

α = softmax(α′)

However, in order to reduce the impact of irrelevant labels on understanding the text, we design a gate mechanism. Under this mechanism, labels whose contribution does not reach the threshold have their weight released, and this weight is reassigned to the other labels.

where γ and ε both denote hyper-parameters. Here, sigmoid(ε) denotes the threshold value at which the contribution meets the requirement, and exp(γ) denotes the compensation the model applies to the retained statistical features. Finally, Gate denotes the gate function that acts as a filter to extract the necessary information.

where αℓ denotes the ℓ-th dimensional value of α ∈ R^(1×m) (1 ≤ ℓ ≤ m).

Thereafter, in order to systematically integrate the two text representations, yc and the gated attentive representation of EL obtained above are concatenated.

where Y ∈ R^(1×2D) represents the vector after concatenation, which has the advantage of retaining all information [26]. Then, the potential correlation between yc and the gated statistical representation is learned through a fully connected layer, which reduces the dimension back to D:

y = Y W + b
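The sketch below puts the Gate-Attention fusion together in PyTorch; the dot-product scoring, softmax, concatenation, and linear projection follow the text above, while the exact thresholding and weight-redistribution rule of the gate is our own reading and should be treated as an assumption:

```python
# Sketch of the Gate-Attention fusion (PyTorch). The gate's thresholding and
# redistribution rule below are assumptions; the paper's exact formula is not shown here.
import torch
import torch.nn as nn

def gate_attention(y_c, E_L, eps=0.0):
    """y_c: (1, D) semantic query; E_L: (m, D) reconstructed label statistics.
    The paper's gate also includes an exp(gamma) compensation term, omitted here."""
    scores = (y_c * E_L).sum(dim=-1)                  # f(y_c, E_L^l): element-wise dot product, (m,)
    alpha = torch.softmax(scores, dim=-1)             # normalized attention weights alpha
    threshold = torch.sigmoid(torch.tensor(eps)) / alpha.numel()   # relative threshold (assumption)
    gated = torch.where(alpha >= threshold, alpha, torch.zeros_like(alpha))
    gated = gated / gated.sum().clamp_min(1e-8)       # released weight flows to the kept labels
    return gated.unsqueeze(0) @ E_L                   # weighted sum over label statistics, (1, D)

D, m = 256, 54
y_c, E_L = torch.rand(1, D), torch.rand(m, D)
stat_repr = gate_attention(y_c, E_L)                  # gated attentive statistical representation
Y = torch.cat([y_c, stat_repr], dim=-1)               # (1, 2D): concatenation keeps all information
y = nn.Linear(2 * D, D)(Y)                            # fully connected layer reduces back to D
print(y.shape)                                        # torch.Size([1, 256])
```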

    3.4 S-Net:Sequence Enhancement Generation Network

After obtaining the feature vector y containing the statistical information, it must be decoded by an LSTM to assign the appropriate labels. To address the problems of error transmission and information loss during LSTM decoding, we designed a Dual-end enhancement mechanism that enhances the information at both the input and output ends of the LSTM. The overall structure of the model is shown in Fig. 3.

Figure 3: S-Net structure diagram

First, we share the feature vector y from the F-Net equally with each LSTM unit, which reduces the erroneous impact of hidden information passed from the previous step.

where Lt−1 denotes an embedded representation of the label output at the previous step, and t denotes the t-th time step.

Second, we also enhance the output of each LSTM unit. We use an attention mechanism to relate different labels to different important words. The feature matrix ET from the F-Net serves as the query and the value, and the hidden state ht of the improved LSTM unit serves as the key. Therefore, we obtain the attention weight representation βt:

βt = softmax(ht ET^T)

where ET ∈ R^(c×D) needs to be transposed first. Afterward, we obtain the attentive representation Ht through an attention-weighted sum as:

Ht = βt ET

Compared to ht, Ht adds references to the words important for understanding the labels, which reduces the impact of insufficient information transmitted from the previous step. After that, Ht is passed into a fully connected neural network to further learn the deep connection between Ht and ht, and the corresponding label is output through the softmax function.
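A hedged sketch of one S-Net decoding step with both enhancements, assuming PyTorch modules; the <start> token embedding, the exact concatenation orders, and the classifier input are illustrative assumptions rather than the paper's precise definitions:

```python
# Sketch of one S-Net decoding step with the Dual-end enhancement (PyTorch).
# Input end: the fused vector y is shared with every step alongside the previous
# label embedding. Output end: the hidden state attends over E_T to form H_t.
import torch
import torch.nn as nn

D, n_labels, c = 256, 54, 500
label_embed = nn.Embedding(n_labels + 1, D)          # +1 for a <start> token (assumption)
cell = nn.LSTMCell(input_size=2 * D, hidden_size=D)  # input = [L_{t-1}; y]
out_proj = nn.Linear(2 * D, n_labels)                # classify from [H_t; h_t] (assumption)

def decode_step(prev_label, y, E_T, state):
    L_prev = label_embed(prev_label)                            # (1, D) previous label embedding
    h_t, c_t = cell(torch.cat([L_prev, y], dim=-1), state)      # input-end enhancement
    beta = torch.softmax(h_t @ E_T.squeeze(0).T, dim=-1)        # attention of h_t over E_T, (1, c)
    H_t = beta @ E_T.squeeze(0)                                 # attentive representation, (1, D)
    logits = out_proj(torch.cat([H_t, h_t], dim=-1))            # output-end enhancement
    return logits.softmax(dim=-1), (h_t, c_t)

y = torch.rand(1, D)                                  # fused text vector from the F-Net
E_T = torch.rand(1, c, D)                             # feature matrix from the F-Net
state = (torch.zeros(1, D), torch.zeros(1, D))
probs, state = decode_step(torch.tensor([n_labels]), y, E_T, state)
print(probs.shape)                                    # torch.Size([1, 54])
```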

4 Experiments

    4.1 Dataset Description

This experiment uses two publicly available English datasets, AAPD and RCV1-2, to train and test the model. Each dataset is divided into three parts: a training set, a validation set, and a test set. The AAPD dataset is a collection of 55,840 abstracts and their corresponding subject categories collected and collated by Li et al. [6] from the internet, with a total of 54 labels; the task is to predict the subjects of academic papers from their abstracts. The RCV1-2 dataset comes from a Reuters news column and was compiled by Lewis et al. [27]. It contains 804,414 news stories, each assigned multiple topics, with 103 topics in total. The details of the two datasets are given in Table 1, where Ntrain is the number of training samples, Ntest is the number of test samples, L is the total number of labels, L̄ is the average number of labels per sample, Wtrain is the average number of words per training sample, and Wtest is the average number of words per test sample.

    Table 1: Details of the datasets

To test the effect of the model on texts with different numbers of labels, the label distributions of AAPD and RCV1-2 were also calculated; the results are shown in Fig. 4.

Figure 4: Dataset label distribution

    4.2 Experimental Details

We set the sample length of the training set to 500, padding with <pad> when a sample is shorter and truncating the rest. The AAPD vocabulary contains 30,000 words and the RCV1-2 vocabulary contains 50,000 words. The word embedding dimension D is set to 256, the length of the V-Net intermediate vector is set to 256, the length of the Bi-LSTM in the F-Net is set to 500, and the length of the LSTM in the S-Net is set to 10. To prevent overfitting, dropout is used with a drop rate of 0.5. The Adam optimizer is used with a learning rate of 0.001. Finally, the V-Net is trained separately and its results are screened for subsequent use.
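For convenience, the hyperparameters reported above can be collected into a single configuration (the dictionary layout itself is only illustrative):

```python
# Hyperparameters reported in this subsection, gathered into one config dict.
config = {
    "max_text_length": 500,          # pad with <pad>, truncate beyond 500 tokens
    "vocab_size": {"AAPD": 30_000, "RCV1-2": 50_000},
    "embedding_dim": 256,            # word embedding dimension D
    "vnet_code_dim": 256,            # length of the V-Net intermediate vector
    "fnet_bilstm_length": 500,       # Bi-LSTM length in the F-Net
    "snet_lstm_length": 10,          # LSTM length in the S-Net (max label sequence)
    "dropout": 0.5,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
}
```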

    4.3 Comparison Methods

    We compare our proposed method with the following baselines:

• BR [13]: This method converts multi-label classification into multiple binary classification tasks and trains a binary classifier for each label.

• CC [14]: This method converts multi-label classification into a chain of binary classification problems.

• LP [15]: Treats each label combination as a new class and transforms the MLTC problem into a multi-class classification problem.

• CNN-RNN [16]: The model uses a CNN to capture local features of the text and an RNN to capture global features, and finally fuses them into a feature vector containing both types of information.

• SGM [5]: A sequence generation model that uses an LSTM-based Seq2Seq model with an attention mechanism, while the decoding phase uses global embedding to capture inter-label dependencies.

• SGM with Global Embedding (SGM-GE) [5]: Employs the same sequence-to-sequence model as SGM with a novel decoder structure to tackle the MLTC problem.

• Seq2Set [17]: Improves on SGM by adding a Set Decoder module to reduce the impact of mislabeling.

• Multi-Label Reasoner (ML-Reasoner) [28]: This model designs a reasoning-based multi-label classification algorithm, reducing the model's dependence on label order.

• Seq2Seq Model with a Different Label Semantic Attention Mechanism (S2S-LSAM) [29]: This model generates fused information containing label and text information through the interaction between label semantics and text features in the label semantic attention mechanism.

• Spotted Hyena Optimizer with Long Short Term Memory (SHO-LSTM) [30]: The Spotted Hyena Optimizer algorithm is used to optimize the LSTM network.

• MLC-LWL [18]: This model uses a topic model over labels to construct effective word-wise label information and combines the label information carried by words with the label context information through a gated network.

• Label-Embedding Bi-Directional Attentive (LBA) [31]: Proposes a Label-Embedding Bi-Directional Attentive model by fully leveraging fine-grained token-level text representations and label embeddings.

• Counter Factual Text Classifier (CFTC) [32]: Achieves causality-based predictions by effectively eliminating the correlation bias in MLTC datasets, significantly improving the model's performance.

    4.4 Experimental Results

    4.4.1 Comparative Experiments

We compared the proposed VFS model with all baseline models on the AAPD and RCV1-2 datasets; the results are shown in Table 2. Our proposed model achieves excellent performance, with the best results on three indicators. On the AAPD dataset, the VFS model reduces hamming-loss by 5.55% and improves the micro-F1 score by 1.41% over MLC-LWL, the strongest baseline. Although its micro-precision is 7.03% lower than that of MLC-LWL, it is still 2.17% higher than that of SHO-LSTM. The results on the RCV1-2 test set are similar to those on the AAPD test set: the VFS model reduces hamming-loss by 8.22% and improves the micro-F1 score by 0.68% over MLC-LWL. These results demonstrate the significant advantages of our proposed model. In Table 2, HL, P, R and F1 denote hamming-loss [33], micro-precision, micro-recall and micro-F1 [34], respectively. The symbol "+" indicates that higher values are better, while the symbol "-" indicates the opposite (lower values are better).
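For reference, these metrics can be computed on multi-hot label matrices as in the sketch below (using scikit-learn; this is not the paper's evaluation script, and the toy matrices are made up):

```python
# Sketch of the reported metrics computed on multi-hot label matrices (scikit-learn).
import numpy as np
from sklearn.metrics import hamming_loss, precision_score, recall_score, f1_score

y_true = np.array([[1, 0, 1, 0], [0, 1, 1, 0]])   # toy ground-truth label matrix
y_pred = np.array([[1, 0, 0, 0], [0, 1, 1, 1]])   # toy predictions

print("HL (-):", hamming_loss(y_true, y_pred))                       # lower is better
print("P  (+):", precision_score(y_true, y_pred, average="micro"))   # higher is better
print("R  (+):", recall_score(y_true, y_pred, average="micro"))
print("F1 (+):", f1_score(y_true, y_pred, average="micro"))
```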

    Table 2: Comparison between our methods and all baselines on two datasets

    4.4.2 Ablation Experiment

In addition, we used SGM, a classic model in the field of MLTC, as the baseline model, and replaced its Encoder with VF (V-Net plus F-Net) and its Decoder with S-Net, respectively. The results are shown in Fig. 5. The figure shows that replacing the Encoder with VF or the Decoder with S-Net each improves the SGM model, and the combination of VF and S-Net performs best. This fully demonstrates the effectiveness of both VF and S-Net.

Figure 5: Comparison diagram of the ablation experiment

    4.4.3 Analysis of Label Length Impact

In order to explore the impact of label length on the results, we selected samples with label lengths of 2 to 7 from the RCV1-2 test set and tested them on SGM and VFS, respectively. The results are shown in Fig. 6. The figure shows that both models achieve their best results when the label length is 3, in terms of both HL and F1. Beyond that point, performance degrades as the label length increases, indicating that the longer the label sequence, the harder the classification. However, the performance degradation of the VFS model is smaller than that of SGM as the number of labels grows, indicating that VFS is more robust than SGM.

    4.4.4 Analysis of Attention Weight Distribution

The S-Net model allows words that contribute more to the semantics of the text to receive more attention and thus greater weight. At the same time, the weights also reflect differences across labels. A heat map of part of the attention weights is shown in Table 3. It can be seen that when the VFS model predicts the "cs.CV" label, the words "visual" and "movie" receive more attention from the model, while when predicting the "cs.CL" label, the words "presence", "LSTM", and "verb" receive more attention. This shows that our proposed model automatically assigns greater weight to words that contribute more semantic information, and that it weighs the key words in the text differently for different labels.

    Table 3: Visualization of attention weight distribution

Figure 6: Comparison of effects on labels of different lengths

    5 Conclusion

In this paper, we propose a novel fusion strategy that combines statistical features with semantic features in a high-quality manner, solving the mismatch between statistical and semantic features in scale and dimension. Secondly, we propose an information enhancement mechanism that effectively alleviates the problems of information loss and incorrect transmission in LSTM networks. Extensive experimental results show that our proposed model is significantly superior to the baselines. Further analysis shows that our model can effectively capture the semantic contributions of important words. In future work, we plan to explore additional types of statistical features and apply them to tasks such as named entity recognition and even image classification. Although our proposed model can alleviate the impact of an increasing number of labels to some extent, it still struggles with prediction tasks involving a very large number of labels; further exploration is needed in this area.

Acknowledgement: None.

Funding Statement: This work was supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 62162022, 62162024), the Key Research and Development Program of Hainan Province (Grant Nos. ZDYF2020040, ZDYF2021GXJS003), the Major Science and Technology Project of Hainan Province (Grant No. ZDKJ2020012), the Hainan Provincial Natural Science Foundation of China (Grant Nos. 620MS021, 621QN211), and the Science and Technology Development Center of the Ministry of Education Industry-University-Research Innovation Fund (2021JQR017).

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Xiaolong Chen, Jieren Cheng; data collection: Xiaolong Chen, Wenghang Xu, Shuai Hua; analysis and interpretation of results: Xiaolong Chen, Zhu Tang; draft manuscript preparation: Xiaolong Chen, Victor S. Sheng. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data that support the findings of this study are available from the corresponding author, Xiaolong Chen, upon reasonable request.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
