
    Cross‐domain sequence labelling using language modelling and parameter generating


    Bo Zhou|Jianying Chen|Qianhua Cai|Yun Xue|Chi Yang|Jing He

    1 School of Electronics and Information Engineering, South China Normal University, Foshan, China

    2 Department of Neuroscience, University of Oxford, Oxford, Oxfordshire, Britain

    Abstract Sequence labelling (SL) tasks are widely studied in the field of natural language processing. Most sequence labelling methods are developed on large amounts of labelled training data via supervised learning, which is time-consuming and expensive. As an alternative, domain adaptation trains a deep-learning model for sequence labelling in a target domain by exploiting existing labelled training data in related source domains. To this end, the authors propose a Bi-LSTM model that extracts related knowledge from multiple source domains and learns specific context from the target domain. Further, language modelling training is applied to facilitate cross-domain adaptability. The proposed model is extensively evaluated on named entity recognition and part-of-speech tagging tasks. The empirical results demonstrate the effectiveness of the cross-domain adaptation: our model outperforms the state-of-the-art methods on both cross-domain tasks and crowd annotation tasks.

    1 | INTRODUCTION

    Linguistic sequence labelling (SL) is one of the classic tasks in natural language processing (NLP): its purpose is to assign a label to each unit in a sequence and thus map a sequence of observations to a sequence of labels [1]. As a pivotal step in most NLP applications, SL is widely applied to numerous real-world problems, including but not limited to part-of-speech (POS) tagging [2], named entity recognition (NER) [3] and word segmentation [4, 5]. Basically, SL algorithms exploit manually labelled data, on which effective approaches are required for leveraging distinctive features from limited information [6]. Previous studies employing supervised learning models mainly depend on high-quality data annotations [7], whose collection is both time-consuming and expensive. More recently, there is an ongoing trend to apply deep-learning-based models to detect distinctive features in SL tasks [8]. In particular, the recurrent neural network (RNN) is both creative and practical in capturing long-term relations within sequential structure [2].

    More recently, cross-domain SL has received increasing attention. Ongoing studies report that parameter transferring and language modelling (LM) show their strengths in cross-domain adaptation [9]. On the one hand, parameter transferring covers parameter sharing and parameter generating. Using trained models, the former aims to transfer shared information from the source domain to target ones [10], while the latter results in a variety of learnable parameters for information extraction across different domains [11]. On the other hand, LM is capable of capturing both the targets and the context patterns during training [12]. As reported in Ref. [13], on the task of named entity recognition (NER), knowledge transfer is performed by contrasting large amounts of raw data in both domains through cross-domain LM training. Besides, the effectiveness of attention mechanisms has also been highlighted, owing to their dynamic aggregation of specific knowledge among sources [7].

    Notwithstanding, the use of the aforementioned approaches is still limited, mainly because performance drops significantly for domains with large differences. That is, current methods fail to apply the trained model to the target domain whenever shared knowledge is absent. Even when adaptation to one single target domain succeeds, transferring knowledge to a variety of new domains remains challenging. Furthermore, although training is restricted to the source domain, learning specific knowledge within the target domain has the potential to benefit SL tasks as well.

    There is a considerable gap between the state-of-the-art results and effectively delivering a trained model to multiple domains. In this work, we approach this problem in two ways, that is, extracting shared information from multi-source domains and learning specific knowledge from the target domain. On this basis, an accurate and efficient Bi-LSTM-based approach for SL across distinct domains is designed and deployed. Our contributions are threefold:

    (1) Parameter sharing is conducted via the model training process within multi-tasks and multi-domains. Besides, the model parameters are specifically selected in line with the context of the target domain.

    (2) LM is dedicatedly established and performed as the pretraining task of the model, which aims to learn the contextual information and domain-specific knowledge.

    (3) The attention mechanism is applied to collect the domain-specific knowledge from different source domains. In this way, more relevant information from source domains is conveyed to the target domain.

    2 | RELATED WORK

    2.1 | Sequence labelling (SL)

    Classical methods for SL are generally linear statistical models, including Hidden Markov Models (HMMs) [14], Maximum Entropy Markov Models (MEMMs) [15], Conditional Random Fields (CRFs) [16], etc. All these models rely heavily on hand-crafted features and task-specific resources. In contrast, deep neural networks facilitate the process by automatically extracting features from raw text via model training. Both CNN- and RNN-based models have been built to deal with such issues. Zhao et al. [1] propose a Deep Gated Dual Path CNN architecture to capture a large context through stacked convolutions. Yang et al. [10] devise a transfer learning approach based on a hierarchical RNN that exploits information from different lingual/domain datasets by sharing multi-task model parameters. As a commonly used RNN variant, long short-term memory (LSTM) is widespread in existing studies. With the integration of a CRF, BiLSTM-based methods are deemed able to achieve state-of-the-art performance across various SL tasks [2, 6, 17].

    2.2 | Parameter transferring

    Generally, the main purpose of parameter transferring is to improve the performance on a target task by joint training with a source task. Parameter sharing, well known for its use across languages/domains/applications, performs impressively in tasks with fewer available annotations [10, 18]. Combined with domain properties, not only the domain-specific parameters but also the representation vectors can be derived [19]. Nevertheless, multi-task learning based on parameter sharing is limited in parameter setting due to potential conflicts of information [13]. In an effort to mitigate this deficiency, parameter generating is one promising direction, with previous publications exploring its feasibility and validating its efficacy. Platanios et al. [11] devise a contextual parameter generator (CPG) that generates the parameters for the encoder and the decoder of a neural machine translation system. On the task of NER, Jia et al. [13] propose a parameter generating network of deep-learning models to transfer knowledge across domains and applications.

    2.3 | Language model

    Language models (LMs) are employed in neural networks to obtain specific representations across multiple tasks [20]. Liu et al. [21] construct a task-aware neural LM termed LM-LSTM-CRF, which incorporates a character-aware neural LM to extract character-level embeddings. Liu et al. [22] propose a way of compressing an LM into a module of an RNN. Accordingly, an LM that preserves useful task-specific information can be applied to cross-domain SL tasks.

    3 | OUR APPROACH

    3.1 | Problem definition

    For a given sentence $S_i = [s_{i,1}, s_{i,2}, \ldots, s_{i,n}]$ from the multi-source domains, we take $s_k$ as the $k$th source domain and $s_t$ as the target domain. Let $\Theta = (W \otimes I_k)$ $(k = 1, 2, \ldots, p)$ be the parameters involved in each domain, where $W$ stands for the shared parameters, $I_k$ for the domain-specific parameters, and $\otimes$ is the tensor contraction operation. That is, the parameters for every source domain are the integration of $W$ and $I_k$. Similarly, in the target domain, we have parameters composed of $W$, $I_t^{lm}$ and $I_t^{sl}$, where $I_t^{lm}$ is pretrained from the language model task and $I_t^{sl}$ is the outcome of integrating $I_k$ and the target-domain contexts. Figure 1 presents the framework of the multi-domain SL task, which is the basis of our model.
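    To make the composition $\Theta = W \otimes I_k$ concrete, the following minimal sketch (our illustration, not code from the paper) contracts a shared parameter tensor with a domain embedding to obtain the flattened Bi-LSTM parameters of one domain; all variable names and sizes are assumptions.

        import numpy as np

        # Sketch of the parameter composition Theta = W (x) I_k: the shared
        # tensor W of shape (P, U) is contracted with a U-dimensional domain
        # embedding I_k, yielding the P flattened Bi-LSTM parameters of domain k.
        P = 1200   # total number of flattened Bi-LSTM parameters, P(LSTM)
        U = 8      # size of the domain-specific embedding I_k

        rng = np.random.default_rng(0)
        W = rng.standard_normal((P, U)) * 0.01   # shared parameter tensor
        I_k = rng.standard_normal(U)             # embedding of source domain k

        theta_k = W @ I_k                        # contraction over U -> shape (P,)
        print(theta_k.shape)                     # (1200,)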

    3.2 | Model establishing

    Figure 2 shows the architecture of our model. For each input sentence, the proposed model first maps the words into word embeddings via the shared input layer. Subsequently, a Bi-LSTM is employed for context encoding, and a parameter generator is used to resolve the parameters of the Bi-LSTM within the parameter generating layer. Specifically, for the inputs of SL tasks, the attention mechanism is also applied to parameter generating. Lastly, the outcomes are sent to different tasks in line with the processing in the output layer.

    FIGURE 1 Illustration of using the multi-source domain model in SL. SL represents sequence labelling tasks and LM represents language modelling

    FIGURE 2 Model overview

    The following sections describe each component of the proposed model in more detail.

    3.3 | Shared embedding layer

    We define a sentence collection $S = [S_1, \ldots, S_i, \ldots, S_m]$ with the corresponding labels, which comes from the target domain and the SL tasks. Each input sentence is paired with its corresponding label sequence. On the task of SL, the input comes either from the $k$th $(k = 1, 2, \ldots, p)$ source domain or from the target domain. Similarly, related raw text is available for the $k$th source domain and for the target domain.

    FIGURE 3 Concatenation principle of word embedding and character embedding. Module input: a word from a given sentence; module output: concatenation of word embedding and character embedding

    As shown in Figure 3, in this layer a convolutional neural network (CNN) is applied to extract character-level features from the input sequence, whose outputs are concatenated with the word embeddings. Accordingly, we have the layer output as follows:

    where $v_{i,j}$ is the word representation of word $s_{i,j}$, $e_w$ is a shared word embedding lookup table and $e_c$ is the shared character embedding lookup table.
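    The sketch below illustrates one plausible implementation of this shared embedding layer in PyTorch: a character-level CNN is max-pooled per word and its output concatenated with the word embedding to form $v_{i,j}$. The module name, kernel size and pooling choice are assumptions rather than details taken from the paper.

        import torch
        import torch.nn as nn

        class CharCNNWordEmbedding(nn.Module):
            """Shared embedding layer sketch: a character-level CNN is max-pooled
            per word and concatenated with the word embedding (30-dim characters
            and 100-dim words as in Section 4.2; everything else illustrative)."""

            def __init__(self, vocab_size, char_vocab_size, word_dim=100, char_dim=30,
                         char_filters=30, kernel_size=3):
                super().__init__()
                self.word_emb = nn.Embedding(vocab_size, word_dim)        # e_w
                self.char_emb = nn.Embedding(char_vocab_size, char_dim)   # e_c
                self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size, padding=1)

            def forward(self, word_ids, char_ids):
                # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
                b, n, L = char_ids.shape
                w = self.word_emb(word_ids)                               # (b, n, word_dim)
                c = self.char_emb(char_ids).view(b * n, L, -1).transpose(1, 2)
                c = torch.relu(self.char_cnn(c)).max(dim=2).values        # pool over characters
                return torch.cat([w, c.view(b, n, -1)], dim=-1)           # v_{i,j}

        # usage sketch
        layer = CharCNNWordEmbedding(vocab_size=5000, char_vocab_size=80)
        v = layer(torch.randint(0, 5000, (2, 7)), torch.randint(0, 80, (2, 7, 12)))
        print(v.shape)  # torch.Size([2, 7, 130])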

    3.4 | Parameter generating layer

    The parameter generating layer is constructed on a basic Bi-LSTM and aims to transfer knowledge across domains via parameter transferring. In addition to the shared parameters, a parameter generator is devised to determine the Bi-LSTM parameters. Figure 4 shows the parameter transferring scheme together with its principle. Concretely, each input sentence falls into one of the following categories based on the type of task and the specific domain:

    If a sentence is of either the LM task or the SL task from the source domain(s), the Bi-LSTM parameter set is generated directly in the parameter generator.

    If a sentence is of the LM task from the target domain, the Bi-LSTM parameters are generated directly in the parameter generator.

    If a sentence is of the SL task from the target domain, the Bi-LSTM parameters are generated with the parameter generator and attention.

    Basically, for a sentence from the source domain, the parameter set applied to the Bi-LSTM is delivered as follows:

    FIGURE 4 Parameter generation principle. Module input: a sentence of the target domain or multi-source domains; module output: long short-term memory (LSTM) hidden state of each word in the sentence

    where $W$ stands for the shared parameter with $W \in \mathbb{R}^{P(\mathrm{LSTM}) \times U}$ and $I_k$ indicates that of the $k$th source domain with $I_k \in \mathbb{R}^{U}$; $P(\mathrm{LSTM})$ is the total number of LSTM parameters; $\otimes$ refers to the tensor contraction operation.
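    A minimal PyTorch sketch of such a parameter generator is given below, assuming only the relation $\theta_k = W \otimes I_k$ described above; the way the flat vector is sliced into LSTM weight matrices, and all names and sizes, are illustrative assumptions.

        import torch
        import torch.nn as nn

        class ParameterGenerator(nn.Module):
            """Holds the shared tensor W of shape (P, U) and one learnable
            embedding I_k per domain; theta_k = W (x) I_k is the flat parameter
            vector generated for domain k (names illustrative)."""

            def __init__(self, num_domains, lstm_param_count, domain_dim=8):
                super().__init__()
                self.W = nn.Parameter(torch.randn(lstm_param_count, domain_dim) * 0.01)
                self.I = nn.Embedding(num_domains, domain_dim)

            def forward(self, domain_id):
                I_k = self.I(domain_id)          # (domain_dim,)
                return self.W @ I_k              # flat theta_k, shape (P,)

        def unpack_lstm_direction(theta, input_dim, hidden_dim):
            """Slice the flat theta into the weights of one LSTM direction (4 gates),
            mirroring how the generated parameters would be consumed."""
            n_ih = 4 * hidden_dim * input_dim
            n_hh = 4 * hidden_dim * hidden_dim
            W_ih = theta[:n_ih].view(4 * hidden_dim, input_dim)
            W_hh = theta[n_ih:n_ih + n_hh].view(4 * hidden_dim, hidden_dim)
            bias = theta[n_ih + n_hh:]
            return W_ih, W_hh, bias

        input_dim, hidden_dim = 130, 50                       # illustrative sizes
        P = 4 * hidden_dim * (input_dim + hidden_dim + 1)     # parameters of one direction
        gen = ParameterGenerator(num_domains=6, lstm_param_count=P)
        theta_k = gen(torch.tensor(2))                        # parameters for domain k = 2
        W_ih, W_hh, bias = unpack_lstm_direction(theta_k, input_dim, hidden_dim)
        print(W_ih.shape, W_hh.shape, bias.shape)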

    With the word representation $v_{i,j}$ from the shared embedding layer sent to the Bi-LSTM, the hidden states of both the forward and backward directions are derived as follows:

    Similarly, for a sentence of the LM task from the target domain, the parameters of the Bi-LSTM are obtained by the same process as presented in Equations (2)-(4). Notably, this LM task is considered a pretraining step for SL, whose parameter set can be further exploited in SL tasks.

    For the input of the SL task from the target domain, the attention module is applied to facilitate the parameter generating (see Figure 5). Based on the pretraining from the LM, we obtain the representation of sentence $S_i$ by concatenating the last hidden states of the forward LSTM and the backward LSTM, which is

    At this stage, the target-domain-specific sentence representations, as well as the source domain parameters $I_k$, are fed into the attention module to obtain a normalised weight for each source domain. That is, the attentive weight $\alpha_k^i$ of each source domain, denoting the source-domain knowledge, can be computed as

    FIGURE 5 Attention module in the parameter generating layer. Module input: sentence embedding and parameters of multi-source domains; module output: target-domain-specific long short-term memory (LSTM) parameter of each sentence

    The target-domain parameter $I_t^{sl}$ for the SL task is computed by using the source-domain attention weights:

    As presented in Equation (8), more information from the source domains is integrated into $I_t^{sl}$ of the target domain. Correspondingly, the parameter set of the Bi-LSTM, which refers to the SL task of the target domain, is obtained, that is,

    Notably, the computation of attention weights is omitted when there is only a single source domain. In that case, the LSTM parameters of the target domain are obtained by integrating the shared parameters and the dataset parameters, which is given as

    Subsequently, for the Bi-LSTM unit, the generated parameter set and the input $v_{i,j}$ are used to compute the hidden-state outputs:
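    The following sketch shows one way the attention over source-domain parameters could be realised: the LM-derived sentence representation scores each source-domain embedding $I_k$, and the softmax-normalised weights combine them into a target-domain-specific embedding. The bilinear scoring function and all names are assumptions, not specifics from the paper.

        import torch
        import torch.nn as nn

        class SourceDomainAttention(nn.Module):
            """The LM-derived sentence representation scores each source-domain
            embedding I_k; the softmax weights alpha_k combine the I_k into a
            target-domain-specific embedding (bilinear scoring is an assumption)."""

            def __init__(self, sent_dim, domain_dim):
                super().__init__()
                self.score = nn.Bilinear(sent_dim, domain_dim, 1)

            def forward(self, sent_repr, source_I):
                # sent_repr: (sent_dim,); source_I: (num_sources, domain_dim)
                p = source_I.size(0)
                scores = self.score(sent_repr.expand(p, -1), source_I).squeeze(-1)
                alpha = torch.softmax(scores, dim=0)           # attention weights alpha_k
                return alpha @ source_I, alpha                 # combined embedding, weights

        # usage sketch: 2 x 50-dim sentence representation, 5 source domains
        attn = SourceDomainAttention(sent_dim=100, domain_dim=8)
        I_combined, alpha = attn(torch.randn(100), torch.randn(5, 8))
        print(I_combined.shape, alpha)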

    3.5 | Output layer

    As mentioned above, for both the source domains and the target domain, the input sentences belong to either the LM task or the SL task. Specifically, we employ CRFs to process sentences of SL and Negative Sampling Softmax (NSSoftmax) to process those of LM, as presented in Figure 6. Each component is described as follows:

    CRFs [13]: In terms of the SL tasks, the output of the parameter generating layer is the concatenation of the hidden states from the forward and backward LSTM, that is, $h_{i,j} = [\overrightarrow{h}_{i,j}; \overleftarrow{h}_{i,j}]$, with the corresponding label sequence $y_i = y_{i,1}, \ldots, y_{i,j}, \ldots, y_{i,n}$. The probability of the label sequence $y_i$ is defined as follows:

    where the normalisation in the denominator runs over all possible label sequences, the weight parameter is associated with $y_{i,j}$, and the bias term is associated with the transition from $y_{i,j-1}$ to $y_{i,j}$. Notably, the first-order Viterbi algorithm is used to extract the highest-scored label sequence, while the proposed CRF is shared across the source and target domains.
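    As a concrete illustration of the decoding step, the sketch below implements first-order Viterbi decoding over per-token emission scores and tag-transition scores; the array names and shapes are assumptions, and in practice the emission and transition values would come from the Bi-LSTM and the shared CRF.

        import numpy as np

        def viterbi_decode(emissions, transitions):
            """First-order Viterbi decoding: emissions has shape (n, num_tags)
            (per-token scores), transitions has shape (num_tags, num_tags) with
            transitions[a, b] being the score of moving from tag a to tag b."""
            n, num_tags = emissions.shape
            score = emissions[0].copy()                    # best score ending in each tag
            backptr = np.zeros((n, num_tags), dtype=int)

            for t in range(1, n):
                # candidate[a, b]: best path ending in a at t-1, then a -> b, plus emission of b
                candidate = score[:, None] + transitions + emissions[t][None, :]
                backptr[t] = candidate.argmax(axis=0)
                score = candidate.max(axis=0)

            best_last = int(score.argmax())                # follow back-pointers from the best final tag
            path = [best_last]
            for t in range(n - 1, 0, -1):
                path.append(int(backptr[t, path[-1]]))
            return path[::-1], float(score.max())

        # usage sketch: 4 tokens, 3 tags
        rng = np.random.default_rng(1)
        tags, best = viterbi_decode(rng.standard_normal((4, 3)), rng.standard_normal((3, 3)))
        print(tags, best)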

    NSSoftmax [13]: In line with the Bi-LSTM model, the forward hidden states and the backward hidden states are applied to a forward LM and a backward LM, respectively. Given the forward LSTM hidden state $\overrightarrow{h}_{i,j}$, the probability of the next word $s_{i,j+1}$ given $s_{i,1:j}$ can be computed by using the NSSoftmax:

    FIGURE 6 Output layer. Module input: long short-term memory (LSTM) hidden states; Conditional Random Field (CRF) output: sequence labelling (SL) outcomes of the words in a sentence; NSSoftmax output: probability of each word in the sentence

    where $\#s_{i,j}$ is the vocabulary index of the target word $s_{i,j}$, $W^{T}$ is the transpose of the corresponding target word vector and $b_{\#}$ is the target word bias. In Equation (13), $Z$ stands for the normalisation term, and $N_{s_{i,j}}$ is the negative sample set of $s_{i,j}$. Each element in the set is a random number from 1 to the cross-domain vocabulary size.

    Similarly, the probability of the prior word $s_{i,j-1}$ given $s_{i,j:n}$ can be obtained from the backward LSTM hidden state, which is
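    A minimal sketch of the NSSoftmax computation is given below: the probability of the target word is its exponentiated score normalised over the target plus a randomly drawn negative sample set, mirroring the normalisation term $Z$ and the negative sample set $N_{s_{i,j}}$ defined above; the function signature and sampling scheme are assumptions.

        import torch

        def nss_probability(hidden, target_idx, output_emb, output_bias, num_neg=10):
            """Probability of the target word under Negative Sampling Softmax:
            its exp-score is normalised over the target word plus a random
            negative sample set drawn from the whole vocabulary."""
            vocab_size = output_emb.size(0)
            neg_idx = torch.randint(0, vocab_size, (num_neg,))       # negative sample set
            idx = torch.cat([target_idx.view(1), neg_idx])           # target first, then negatives

            scores = output_emb[idx] @ hidden + output_bias[idx]     # w^T h + b for each candidate
            z = torch.exp(scores).sum()                              # normalisation term Z
            return torch.exp(scores[0]) / z                          # probability of the target word

        # usage sketch: 50-dim forward hidden state, vocabulary of 5000 words
        p = nss_probability(torch.randn(50), torch.tensor(42),
                            torch.randn(5000, 50), torch.zeros(5000))
        print(float(p))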

    3.6 | Model training

    Since the SL of the target domain is considered as zero-resource learning with cross-domain adaptation, the training processes are carried out on SL of the source domains and LM on both the target domain and the source domains.

    Sequence labelling of source domains: For a labelled dataset, the negative log-likelihood loss is used for model training, which is

    LM of source domains and the target domain: Given a labelled dataset from the source domains, the forward and backward LMs are jointly trained by using NSSoftmax, whose loss function is expressed as


    In the same manner, for a dataset from the target domain, the loss function is

    In most cases, we jointly train the SL and LM tasks on both the target domain and the source domains. The overall loss is denoted as

    where $\lambda_t$ is the task weight of the LM, $\lambda$ is the weight of the $L_2$ regularisation and $\Theta$ stands for the parameter set.
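    A compact sketch of how the overall objective could be assembled is shown below: the source-domain SL loss, the $\lambda_t$-weighted LM losses and an $L_2$ penalty over $\Theta$ are summed; the function and argument names are illustrative, not taken from the paper's code.

        import torch

        def overall_loss(sl_loss, lm_losses, params, lambda_t=1.0, lambda_l2=1e-8):
            """Joint objective sketch: source-domain SL loss, lambda_t-weighted LM
            losses (source and target domains) and an L2 penalty over the
            parameter set Theta. The weights here are placeholders."""
            l2 = sum((p ** 2).sum() for p in params)
            return sl_loss + lambda_t * sum(lm_losses) + lambda_l2 * l2

        # usage sketch with dummy tensors standing in for the task losses
        params = [torch.randn(10, requires_grad=True)]
        loss = overall_loss(torch.tensor(1.3), [torch.tensor(0.8), torch.tensor(0.7)], params)
        print(float(loss))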

    4 | EXPERIMENT

    In line with the purpose of SL tasks, we conduct two kinds of evaluation: one evaluates the cross-domain adaptability of the proposed model on NER and POS tagging tasks, and the other evaluates the performance on NER when training with crowd-sourced data. As mentioned above, the crowd annotation task has only one source dataset, whose target-domain parameters are generated without calculating the attention weights.

    4.1 | Dataset

    Cross-domain datasets: For the evaluation of cross-domain adaptability, both OntoNotes v5 [7, 23] and Universal Dependencies (UD) v2.3 [7, 24] are taken as the datasets for cross-domain sequence labelling.

    OntoNotes v5 [23]: OntoNotes v5 is generally applied to NER tasks. In this experiment, the English part of OntoNotes v5, which involves 9 named entity types and 6 domains, is selected. Specifically, the 6 domains are broadcast conversation (BC), broadcast news (BN), magazine (MZ), newswire (NW), telephone conversation (TC), and web (WB).

    Universal Dependencies (UD) v2.3 [24]: The GUM part of UD v2.3 is used in the POS tagging task. This dataset is annotated with 17 tags and 7 fields. The 7 fields are the following: academic, bio, fiction, news, voyage, wiki, and interview.

    Details of each cross-domain dataset are given in Table 1.

    Crowd-Annotation Datasets: We use the crowd-annotation datasets [7] based on the CoNLL 2003 dataset [25], while the Amazon Mechanical Turk (AMT) data [26] serve as the testing set for the NER task. Statistics of the datasets used in this experiment are shown in Table 2.

    Similar to Ref. [7], all the sentences as well as the corresponding entities are selected for further processing. All the datasets are subdivided into training, development and test sets (see Table 3).

    4.2 | Hyperparameter settings

    In this experiment, NCRF++ is taken as the basic model for the various tasks [27]. The dimensions of the character-level embedding, the word-level embedding and the LSTM hidden layer are set to 30, 100, and 200, respectively, and the model is trained for 100 epochs. All word embeddings are initialised with 100-dimensional word vectors pretrained by GloVe [28] and fine-tuned during training. The character embeddings are obtained via random initialisation [13]. The batch sizes for cross-domain NER and cross-domain POS tagging are 10 and 20, respectively, with a learning rate of 0.01 using RMSprop. We use domain-specific parameters of size 200.
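    For convenience, the settings reported above can be collected into a single configuration object, as in the illustrative sketch below (the key names are our own; the values follow the text).

        # Hyperparameters reported in Section 4.2, collected into an illustrative
        # configuration dictionary (key names are ours; values follow the text).
        CONFIG = {
            "char_emb_dim": 30,
            "word_emb_dim": 100,          # initialised with 100-d GloVe vectors, fine-tuned
            "lstm_hidden_dim": 200,
            "domain_param_dim": 200,      # size of the domain-specific parameters
            "epochs": 100,
            "batch_size": {"cross_domain_ner": 10, "cross_domain_pos": 20},
            "optimizer": "RMSprop",
            "learning_rate": 0.01,
        }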

    4.3 | Baseline models

    To comprehensively evaluate the performance of the proposed model, 4 baseline methods are taken for comparison in the NER tasks. For all the baseline models, the dimensions of the character-level embedding, the word-level embedding and the LSTM hidden layer are set to 30, 100 and 150, respectively. In addition, model parameters are updated by stochastic gradient descent (SGD). The learning rate is initialised as 0.015 and decayed by 5% at each epoch.

    MTL-MVT [29]: Multi-source data is sent to a multi-task learning model with parameter sharing among different tasks. The labels predicted from the multiple sources are voted on, aggregated and applied to the target domain.

    MTL-BEA [30]: Based on a probabilistic graphical model, a transfer model and a generative model are established. Correspondingly, the transition probability is computed and the label for the target domain is thus generated.

    Co-NER-LM [13]: The cross-domain adaptation is performed using cross-domain LM while the knowledge transfer is carried out by designing a parameter generating network.

    TABLE 1 Statistics of cross‐domain datasets

    TABLE 2 Statistics of crowd‐annotation datasets

    TABLE 3 Dataset subdivision

    MULTI-TASK+PGN [13]: A parameter generation network is developed to generate the parameters of the LSTM for both the source domains and the target domain.

    ConNet [7]: A representation of each source is learnt from the annotations of multiple sources. In addition, a context-aware attention module is exploited to dynamically aggregate source-specific knowledge.

    BERT-BASE [31]: The pretrained BERT is employed to extract contextual information while the parameters are fine-tuned on specific tasks.

    For the purpose of model optimisation, our method is also trained using crowd-sourced data, whose effectiveness is likewise verified on the NER tasks. In this setting, 6 crowd-sourced-training models are taken as the baselines, which are the following:

    Crowd-Add [32]: An LSTM-Crowd model is devised where crowd components are element-wise added to the tag scores.

    Crowd-Cat [32]: In an LSTM-Crowd-based model, the crowd vectors are concatenated to the output of the LSTM hidden layer.

    MVT-SLM [7]: Majority voting, based on the crowd annotation data, is conducted at the token level. Thus, the majority label is selected as the gold label for each token.

    MVS-SLM [7]: Similar to MVT, the majority voting is at the sequence level.

    CRF-MA [26]: A probabilistic approach is devised for sequence labelling using CRFs with data from multiple annotators, which relies on a latent variable model where the reliability of the annotators is handled as latent variables.

    CL-MW (MW) [33]: A crowd layer is integrated into a CNN model to learn the weight matrices. The trained model is applied to label prediction.

    4.4 | Results

    Evaluation of cross-domain adaptability: The experimental results of our model compared to the baseline methods are shown in Table 4 and Table 5. The performance of the cross-domain NER tasks is measured by the F1 score, while that of cross-domain POS tagging is measured by accuracy. Among all these methods, the proposed model produces results competitive with the cutting-edge ConNet model. For the cross-domain NER tasks, our model obtains the highest F1 score on the evaluation settings of WB, TC and BC (see Table 4). On the other hand, Table 5 shows that our model achieves the best average accuracy, outperforming ConNet by 0.07%; the main reason is that the attention mechanism aggregates shared knowledge from the multiple sources and thus mitigates the discrepancy among domains. In addition, domain-specific knowledge is also obtained via LM. By contrast, both MULTI-TASK+PGN and Co-NER-LM use the LM as a bridge but ignore the discrepancy among different domains. Moreover, MTL-MVT is constructed on the basis of cross-domain parameter sharing and MTL-BEA exploits a probabilistic graphical model to predict the domain-specific knowledge, whereas both of these models fail to make full use of the knowledge from source domains and to model the differences among domains. Since our model employs parameter transferring as well as LM training, it is reasonable to expect better performance in different target domains, as is the case.

    TABLE 4 Experimental results of cross‐domain named entity recognition(NER)

    TABLE 5 Experimental results of cross‐domain part‐of‐speech(POS)tagging

    However, in comparison with the state-of-the-art methods, our method does not exceed the best performance in all the cross-domain adaptability evaluations. According to Table 4, the accuracy of our model is not as high as that of ConNet on three evaluation sets. Pretraining with the LM not only learns domain-specific knowledge but also introduces unrelated domain information, which results in a drop in accuracy. Another possible explanation is that multi-domain transferring largely depends on the selection of the source domains. Taking the NER task as an example, we analyse the error by removing one specific domain at a time from the current source domains. The results are presented in Figure 7. In some cases, the target domain and a source domain are closely related. For instance, both BN and BC concern broadcasting, while BN and NW relate to journalism. Hence, for the target domain of either BC or NW, the F1 score decreases substantially when BN is removed. According to Figure 7, the performance drops by 3.62% and 3.25% for target domains BC and NW, respectively, without the source domain BN, which is significant. By contrast, domains with little association, for example, TC and BN (the former contains a large number of colloquial expressions while the latter involves formal statements), have less impact on each other. As an example, for the target domain BN, the F1 score even improves by 0.45% when the source domain TC is removed. We thus infer that a close connection between the target domain and a source domain effectively decreases the recognition error and facilitates cross-domain adaptation.

    FIGURE 7 Drop of F1 by removing one source domain

    Furthermore, the attention weight between any two different domains of the OntoNotes v5 dataset is investigated (Figure 8). Consistent with the aforementioned inference, domains with a closer relation make a greater contribution to each other in the NER task. One can easily see that the two highest attention scores are generated between BN and NW as well as between BC and BN, which conforms to our analysis of the recognition error.

    Evaluation of crowd-annotation training: The performance of our model and the baselines on the real-world AMT dataset is exhibited in Table 6. There is a considerable gap in F1-score between our model and the other 10 methods; the minimum performance gap of 2.81% is observed against the Co-NER-LM model. Our model thus demonstrates its superiority in learning from noisy annotation data. Typically, most widely applied models obtain a comparatively high precision but a low recall. This issue stems from a deficiency in capturing the entity information of the target domain from multiple annotators. Note that only one source domain is used in the crowd-annotation task, so the application of domain-specific knowledge is no longer an advantage over the baselines. In addition, the crowd-annotation dataset contains a certain amount of noise that affects the training results. Compared with the baselines, the LM of our method not only learns the contextual information but also filters the noise from the samples. As such, a higher recall as well as a higher F1-score is achieved. Clearly, our model is a better alternative to the state-of-the-art methods.

    FIGURE 8 Attention weight between different domains. The vertical axis represents the target domain and the horizontal axis represents the source domain. The value on the right side stands for the attention weight

    TABLE 6 Experimental results of real‐world crowd‐sourced named entity recognition(NER)

    4.5 | Ablation experiment

    In order to determine the importance of the different components in our model, an ablation study is carried out on the cross-domain tasks, with our full model taken as the baseline. Specifically, MULTI-TASK-without LM represents the removal of LM from the model, which is trained in the source domains using SL tasks with parameter generating via the attention mechanism; MULTI-TASK-without attention stands for the ablation of the attention network, while the model is trained on the SL and LM tasks; STM-without LM & Attention denotes that the model is trained in the source domains using only SL tasks, without the LM and the attention mechanism.

    According to Table 7, the removal of the attention mechanism and of the LM results in F1 declines of 2.50% and 2.86% on average in the NER tasks. Likewise, for the POS task, the accuracy drops of MULTI-TASK-without LM and MULTI-TASK-without attention are 1.25% and 1.18%, respectively (see Table 8). The STM-without LM & Attention model has the worst results in all evaluation settings. For the NER task, since the LM is more capable of learning domain-specific knowledge and contextual information, the average F1 of MULTI-TASK-without attention is slightly higher than that of MULTI-TASK-without LM. By contrast, POS information is largely independent of the domain; thus the contributions of the LM and the attention mechanism are comparable in the POS task.

    4.6 | Case study

    To further verify the capability of the proposed model, we visualise its predictions. Three sentences are selected and applied to the NER task, and the proposed model with and without LM, as well as ConNet, are taken for comparison. According to Figure 9, both ConNet and our model capture all the nouns and predict their labels successfully. By contrast, the proposed model without LM misidentifies the entities 'Soviet', 'West' and 'Indies'. Consistent with the gold labels, all the entities in the three sentences can be precisely recognised by our model, which indicates its effectiveness in SL tasks.

    TABLE 7 Ablation study in the cross‐domain named entity recognition(NER)task

    TABLE 8 Ablation study in cross‐domain part‐of‐speech(POS)task

    FIGURE 9 An example of sentence prediction results. B – beginning of the entity, I – intermediate of the entity, O – not an entity, MISC – other entity, PER – personal name, LOC – location name, and ORG – organisation name

    5 | CONCLUSION

    In this work, we establish a Bi-LSTM-based architecture for SL, which integrates the parameter transferring principle, the attention mechanism, a CRF and NSSoftmax. Despite the discrepancy of information distribution among domains, the proposed model is capable of extracting related knowledge from multiple source domains and learning specific context from the target domain. With the LM training, our model thus shows its distinctiveness in cross-domain adaptation. Experiments are conducted on NER and POS tagging tasks to validate that our model stably obtains a decent performance in cross-domain adaptation. In addition, with training on crowd annotations, the experimental results for NER are further improved, indicating the effectiveness of learning from noisy annotations for higher-quality labels.

    ACKNOWLEDGMENTS

    This work was supported by the National Statistical Science Research Project of China under Grant No. 2016LY98, the Science and Technology Department of Guangdong Province in China under Grant Nos. 2016A010101020, 2016A010101021 and 2016A010101022, the Characteristic Innovation Projects of Guangdong Colleges and Universities (No. 2018KTSCX049), and the Science and Technology Plan Project of Guangzhou under Grant Nos. 202102080258 and 201903010013.

    CONFLICT OF INTEREST

    The authors declare no conflict of interest.

    DATA AVAILABILITY STATEMENT

    Data that support the findings of this study are available from the corresponding author upon reasonable request.

    ORCID

    Bo Zhou https://orcid.org/0000-0001-8097-6668
