
    Cross‐domain sequence labelling using language modelling and parameter generating

    Bo Zhou|Jianying Chen|Qianhua Cai|Yun Xue|Chi Yang|Jing He

    1 School of Electronics and Information Engineering, South China Normal University, Foshan, China

    2 Department of Neuroscience, University of Oxford, Oxford, Oxfordshire, United Kingdom

    Abstract Sequence labelling (SL) tasks are currently widely studied in the field of natural language processing. Most sequence labelling methods are developed on a large amount of labelled training data via supervised learning, which is time-consuming and expensive. As an alternative, domain adaptation is proposed to train a deep-learning model for sequence labelling in a target domain by exploiting existing labelled training data in related source domains. To this end, the authors propose a Bi-LSTM model to extract more related knowledge from multi-source domains and learn specific context from the target domain. Further, language modelling training is also applied to facilitate cross-domain adaptability. The proposed model is extensively evaluated on named entity recognition and part-of-speech tagging tasks. The empirical results demonstrate the effectiveness of the cross-domain adaptation. Our model outperforms the state-of-the-art methods used in both cross-domain tasks and crowd annotation tasks.

    1|INTRODUCTION

    Linguistic sequence labelling (SL) is one of the classic tasks in natural language processing (NLP), whose purpose is to assign a label to each unit in a sequence and thus map a sequence of observations to a sequence of labels [1]. As a pivotal step in most NLP applications, SL is widely applied to numerous real-world problems, including but not limited to part-of-speech (POS) tagging [2], named entity recognition (NER) [3] and word segmentation [4,5]. Basically, SL algorithms exploit manually labelled data, on which effective approaches are required for leveraging distinctive features from limited information [6]. Previous studies employing supervised learning models mainly depend on high-quality data annotations [7]. In such studies, the collection of annotated data is both time-consuming and expensive. More recently, there is an ongoing trend to apply deep-learning-based models to detect distinctive features in SL tasks [8]. Specifically, the recurrent neural network (RNN) is effective and practical for capturing long-term relations within sequential structures [2].

    More recently, cross-domain SL has attracted increasing attention. Ongoing studies report that parameter transferring and language modelling (LM) show their superiority in cross-domain adaptation [9]. On the one hand, parameter transferring comprises parameter sharing and parameter generating. Using trained models, the former aims to transfer shared information from the source domains to target ones [10], while the latter produces a variety of learnable parameters for information extraction across different domains [11]. On the other hand, LM is capable of capturing both the targets and the context patterns during training [12]. As reported in Ref. [13], on the task of named entity recognition (NER), knowledge transfer is performed by contrasting large raw corpora in both domains through cross-domain LM training. Besides, the effectiveness of attention mechanisms is also highlighted, owing to their ability to dynamically aggregate specific knowledge among sources [7].

    Notwithstanding, the use of the aforementioned approaches is still limited, mainly because performance drops significantly between domains with large differences. That is, current methods fail to apply the trained model to the target domain whenever the shared knowledge is absent. Moreover, even when adaptation to a single target domain succeeds, transferring knowledge to a variety of new domains remains challenging. Furthermore, while model training is restricted to a source domain, learning specific knowledge within the target domain has the potential to benefit the SL tasks as well.

    There is a considerable gap between effectively delivering a trained model to multiple domains and the state-of-the-art results. In this work, we approach this problem in two ways, that is, extracting shared information from multi-source domains and learning specific knowledge from the target domain. On this basis, an accurate and efficient Bi-LSTM-based approach for SL across distinct domains is designed and deployed. Our contributions are threefold:

    (1) Parameter sharing is conducted via the model training process across multiple tasks and domains. Besides, the model parameters are specifically selected in line with the context of the target domain.

    (2) LM is established as the pretraining task of the model, which aims to learn the contextual information and domain-specific knowledge.

    (3) The attention mechanism is applied to collect domain-specific knowledge from different source domains. In this way, more relevant information from the source domains is conveyed to the target domain.

    2|RELATED WORK

    2.1|Sequence labelling (SL)

    Classical methods for SL are generally linear statistical models, including Hidden Markov Models (HMMs) [14], Maximum Entropy Markov Models (MEMMs) [15], Conditional Random Fields (CRFs) [16], etc. All these models rely heavily on hand-crafted features and task-specific resources. In contrast, deep neural networks facilitate the process by automatically extracting features from raw text via model training. Both CNN- and RNN-based models have been built to deal with such issues. Zhao et al. [1] propose a Deep Gated Dual Path CNN architecture to capture a large context through stacked convolutions. Yang et al. [10] devise a transfer learning approach based on a hierarchical RNN that exploits the information from different lingual/domain data sets by sharing multi-task model parameters. As a commonly used RNN variant, long short-term memory (LSTM) is widespread in existing studies. With the integration of CRF, BiLSTM-based methods are deemed able to achieve state-of-the-art performance across various SL tasks [2,6,17].

    2.2|Parameter transferring

    Generally, the main purpose of parameter transferring is to improve the performance on a target task by joint training with a source task. Parameter sharing, well known for its use across languages, domains and applications, performs impressively in tasks with few available annotations [10,18]. With the combination of domain properties, not only the domain-specific parameters but also the representation vectors can be derived [19]. Nevertheless, multi-task learning based on parameter sharing is limited in parameter setting due to potential conflicts of information [13]. Parameter generating is one direction that mitigates this deficiency, with previous publications exploring its feasibility and validating its efficacy. Platanios et al. [11] devise a contextual parameter generator (CPG) that generates the parameters for the encoder and the decoder of a neural machine translation system. On the task of NER, Jia et al. [13] propose a parameter generating network of deep-learning models to transfer knowledge across domains and applications.

    2.3|Language model

    Language models (LMs) are of such significance to model training that they are employed in neural networks to obtain specific representations in multi-task settings [20]. Liu et al. [21] construct a task-aware neural LM termed LM-LSTM-CRF, which incorporates a character-aware neural LM to extract character-level embeddings. Liu et al. [22] propose a way of compressing an LM as a module of an RNN. Accordingly, an LM that preserves useful information with respect to specific tasks can be applied to cross-domain SL tasks.

    3|OUR APPROACH

    3.1|Problem definition

    For a given sentence S_i = [s_{i,1}, s_{i,2}, …, s_{i,n}] from the multi-source domains, we take s_k as the k-th source domain and s_t as the target domain. Let Θ = (W ⊗ I_k) (k = 1, 2, …, p) be the parameters involved in each domain, where W stands for the shared parameters, I_k for the domain-specific parameters and ⊗ is the tensor contraction operation. That is, the parameters for every source domain are the integration of W and I_k. Similarly, in the target domain, the parameters are composed of W, I_t^lm and I_t^sl, where I_t^lm is pretrained from the language modelling task and I_t^sl is the outcome of integrating I_k with the target-domain contexts. Figure 1 presents the framework of the multi-domain SL task, which is the basis of our model.
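
    To make the decomposition concrete, here is a minimal Python sketch (ours, not the authors' released code; names and toy sizes are illustrative) of how each domain's parameters arise as the contraction of the shared tensor W with a small domain vector:

    import torch

    P, U = 24, 8                  # toy sizes: flattened parameter count and domain-vector size
    W = torch.randn(P, U)         # shared parameters, trained jointly on all domains and tasks
    I_k = torch.randn(U)          # domain-specific parameters of the k-th source domain
    I_t_lm = torch.randn(U)       # target-domain parameters for the LM (pretraining) task

    theta_src_k = W @ I_k         # W ⊗ I_k: parameters used when encoding the k-th source domain
    theta_tgt_lm = W @ I_t_lm     # parameters used for target-domain LM pretraining
    # For the target-domain SL task, I_t^sl is built from the I_k via attention (Section 3.4)
    # and contracted with W in exactly the same way.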

    3.2|Model establishing

    Figure 2 shows the architecture of our model. For each input sentence, the proposed model first maps the words into word embeddings via the shared input layer. Subsequently, a Bi-LSTM is employed for context encoding, and a parameter generator determines the parameters of the Bi-LSTM within the parameter generating layer. Specifically, for inputs of SL tasks, the attention mechanism is also applied to parameter generating. Lastly, the outcomes are sent to different tasks according to the processing in the output layer.

    FIGURE 1 Illustration of using the multi‐source domain model in SL.SL represents sequence labelling tasks and LM represents language modelling

    FIGURE 2 Model overview

    The following sections describe each component of the proposed model in more detail.

    3.3|Shared embedding layer

    We define a sentence collection S = [S_1, …, S_i, …, S_m], with the corresponding labels, that comes from the SL task of the target domain. Let S_i be an input sentence with its corresponding label sequence. On the SL task, the input from the k-th (k = 1, 2, …, p) source domain is defined analogously, as is that of the target domain. Similarly, related raw text is available in the k-th source domain and in the target domain for LM training.

    FIGURE 3 Concatenation principle of word embedding and character embedding. Module input: a word from a given sentence; module output: concatenation of word embedding and character embedding

    As shown in Figure 3, in this layer a convolutional neural network (CNN) is used to extract the character-level features from the input sequence, and its outputs are concatenated with the word embeddings. Accordingly, we have the layer output as follows:

    where v_{i,j} is the word representation of word s_{i,j}, e_w is the shared word embedding lookup table and e_c is the shared character embedding lookup table.
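
    A minimal PyTorch sketch of this layer, under our reading of Figure 3 (the class name, argument defaults and filter width are assumptions, not taken from the paper):

    import torch
    import torch.nn as nn

    class SharedEmbedding(nn.Module):
        def __init__(self, n_words, n_chars, word_dim=100, char_dim=30, char_filters=30):
            super().__init__()
            self.word_emb = nn.Embedding(n_words, word_dim)    # shared lookup table e_w
            self.char_emb = nn.Embedding(n_chars, char_dim)    # shared lookup table e_c
            self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)

        def forward(self, word_ids, char_ids):
            # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
            b, n, L = char_ids.shape
            w = self.word_emb(word_ids)                                   # (b, n, word_dim)
            c = self.char_emb(char_ids).view(b * n, L, -1).transpose(1, 2)
            c = torch.relu(self.char_cnn(c)).max(dim=2).values            # pool over characters
            return torch.cat([w, c.view(b, n, -1)], dim=-1)               # v_{i,j}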

    3.4|Parameter generating layer

    The parameter generating layer is constructed on a basic Bi-LSTM and aims to transfer knowledge across domains via parameter transferring. In addition to the shared parameters, a parameter generator is devised to determine the Bi-LSTM parameters. Figure 4 shows the parameter transferring scheme together with its principle. Concretely, each input sentence can be classified into one of the following categories, based on the type of task and the specific domain:

    If a sentence is of either the LM task or the SL task from the source domain(s), the Bi-LSTM parameter set is generated directly by the parameter generator.

    If a sentence is of the LM task from the target domain, the Bi-LSTM parameters are generated directly by the parameter generator.

    If a sentence is of the SL task from the target domain, the Bi-LSTM parameters are generated with the parameter generator and the attention module.

    Basically, for a sentence from the source domain, the parameter set applied to the Bi-LSTM is derived as follows:

    FIGURE 4 Parameter generation principle. Module input: a sentence of the target domain or multi-source domains; module output: long short-term memory (LSTM) hidden state of each word in the sentence

    where W stands for the shared parameter with W ∈ R^{P(LSTM)×U}, and I_k denotes that of the k-th source domain with I_k ∈ R^U; P(LSTM) is the total number of LSTM parameters; ⊗ refers to the tensor contraction operation.
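
    Continuing the earlier sketch, the flat parameter vector produced by the contraction can be reshaped into the gate matrices of a hand-written LSTM cell, so that each domain effectively owns its own recurrent weights. This is an illustrative reading with toy sizes (the paper uses a 130-dimensional input, 200 hidden units and U = 200), not the released implementation:

    import torch

    def lstm_step(x, h, c, W_ih, W_hh, b):
        # x: (batch, in_dim); h, c: (batch, hid); W_ih: (4*hid, in_dim); W_hh: (4*hid, hid)
        gates = x @ W_ih.t() + h @ W_hh.t() + b
        i, f, g, o = gates.chunk(4, dim=-1)
        c_new = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_new = torch.sigmoid(o) * torch.tanh(c_new)
        return h_new, c_new

    in_dim, hid, U = 30, 50, 16                          # toy sizes
    P_LSTM = 4 * hid * in_dim + 4 * hid * hid + 4 * hid  # parameter count of one LSTM direction
    W_shared = torch.randn(P_LSTM, U) * 0.01             # shared tensor W
    I_k = torch.randn(U)                                 # embedding of the k-th source domain

    theta_k = W_shared @ I_k                             # contraction -> flat parameter vector
    W_ih, W_hh, b = theta_k.split([4 * hid * in_dim, 4 * hid * hid, 4 * hid])
    W_ih, W_hh = W_ih.view(4 * hid, in_dim), W_hh.view(4 * hid, hid)

    x = torch.randn(8, in_dim)                           # one time step of v_{i,j}, batch of 8
    h = c = torch.zeros(8, hid)
    h, c = lstm_step(x, h, c, W_ih, W_hh, b)             # domain-k forward hidden state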

    With the word representation v_{i,j} from the shared embedding layer fed into the Bi-LSTM, the hidden states of both the forward and backward directions are derived as follows:

    Similarly, for a sentence of the LM task from the target domain, the parameters of the Bi-LSTM are obtained through the same process as presented in Equations (2)-(4). Notably, this LM task is considered as a pretraining step of SL, whose parameter set can be further exploited in the SL tasks.

    For the input of the SL task from the target domain, the attention module is employed to facilitate the parameter generating (see Figure 5). Based on the LM pretraining, we obtain the representation of sentence S_i by concatenating the last hidden states of the forward LSTM and the backward LSTM, which is

    At this stage, the target domain-specific sentence representations, as well as the source-domain parameters I_k, are fed into the attention module to obtain a normalised weight for each source domain. That is, the attention weight α_{k,i} of each source domain, denoting the source-domain knowledge, can be conveyed as

    FIGURE 5 Attention module in the parameter generating layer. Module input: sentence embedding and parameters of multi-source domains; module output: target domain-specific long short-term memory (LSTM) parameters of each sentence

    where

    Computation of the target-domain parameter for the SL task is facilitated by using the source-domain attention weights:

    As presented in Equation (8), more information from the source domains is integrated into I_t^sl of the target domain. Correspondingly, the parameter set of the Bi-LSTM for the SL task of the target domain is obtained, that is,

    Notably, when there is only a single source domain, the computation of attention weights is omitted. In this case, the LSTM parameters of the target domain are obtained by integrating the shared parameters and the dataset-specific parameters, which is given as

    Subsequently, for the Bi-LSTM unit, the generated parameter set and the input v_{i,j} are used to compute the hidden-state outputs:
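
    The attention equations themselves did not survive extraction, so the following Python sketch shows one plausible reading of Figure 5 (the bilinear scoring matrix W_att is our assumption): the LM-derived sentence representation attends over the source-domain parameter vectors, and their weighted sum plays the role of I_t^sl.

    import torch
    import torch.nn.functional as F

    hid, U, p = 50, 16, 5                          # toy sizes: hidden dim, domain dim, #source domains
    sent_repr = torch.randn(1, 2 * hid)            # [h_fw; h_bw] of S_i from the LM-pretrained Bi-LSTM
    I_sources = torch.randn(p, U)                  # I_1 ... I_p, one vector per source domain
    W_att = torch.randn(2 * hid, U) * 0.01         # hypothetical bilinear scoring matrix

    scores = (sent_repr @ W_att) @ I_sources.t()   # (1, p): relevance of each source domain
    alpha = F.softmax(scores, dim=-1)              # normalised attention weights α_{k,i}
    I_t_sl = alpha @ I_sources                     # (1, U): target-domain SL parameter vector
    # I_t_sl is then contracted with the shared W, exactly as for the source domains,
    # to produce the Bi-LSTM parameters for the target-domain SL task.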

    3.5|Output layer

    As mentioned above, for both the source domains and the target domain, the input sentences belong to either the LM task or the SL task. Specifically, we employ CRFs to process sentences of SL and Negative Sampling Softmax (NSSoftmax) to process those of LM, as presented in Figure 6. Each component is described as follows:

    CRFs [13]: For the SL tasks, the output of the parameter generating layer is the concatenation of the hidden states from the forward and backward LSTM, with the corresponding label sequence y_i = y_{i,1}, …, y_{i,j}, …, y_{i,n}. The probability of label sequence y_i is defined as follows:

    where the denominator sums over arbitrary label sequences, the weight parameter corresponds to label y_{i,j}, and the bias corresponds to the transition between y_{i,j-1} and y_{i,j}. Notably, the first-order Viterbi algorithm is used to extract the highest-scoring label sequence, while the proposed CRF is shared across the source and target domains.
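
    The CRF equation itself was lost in extraction; a standard linear-chain form consistent with the description above (and with the formulation of Jia et al. [13]), which we assume here, reads

    P(y_i \mid S_i) = \frac{\exp\Big(\sum_{j=1}^{n}\big(\mathbf{w}_{y_{i,j}}^{\top}\mathbf{h}_{i,j} + b_{y_{i,j-1},\,y_{i,j}}\big)\Big)}{\sum_{\tilde{y}}\exp\Big(\sum_{j=1}^{n}\big(\mathbf{w}_{\tilde{y}_{j}}^{\top}\mathbf{h}_{i,j} + b_{\tilde{y}_{j-1},\,\tilde{y}_{j}}\big)\Big)}

    where h_{i,j} is the concatenated Bi-LSTM hidden state of word s_{i,j}, w_y is the weight vector of label y, b is the label-transition bias and ỹ ranges over all candidate label sequences.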

    NSSoftmax [13]: In line with the Bi-LSTM model, the forward hidden states and the backward hidden states are applied to a forward LM and a backward LM, respectively. Given the forward LSTM hidden state, the probability of the next word s_{i,j+1} given s_{i,1:j} can be computed by using the NSSoftmax:

    FIGURE 6 Output layer. Module input: long short-term memory (LSTM) hidden states; Conditional Random Field (CRF) output: sequence labelling (SL) outcomes of the words in the sentence; NSSoftmax output: probability of each word in the sentence

    and

    where #s_{i,j} is the vocabulary index of the target word s_{i,j}, W^T is the transpose of the corresponding target word vector and b_# is the target word bias. In Equation (13), Z stands for the normalisation term, and N_{s_{i,j}} is the negative sample set of s_{i,j}. Each element in the set is a random index between 1 and the cross-domain vocabulary size.
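
    As a hedged illustration of the negative-sampling computation described above (the exact equation was lost in extraction; the function and variable names here are ours), the score of the true next word is normalised only against a small random set of negative vocabulary indices:

    import torch

    V, hid, n_neg = 20000, 200, 10                 # vocabulary size, hidden size, #negative samples
    out_emb = torch.randn(V, hid) * 0.01           # target word vectors (one row per vocabulary entry)
    out_bias = torch.zeros(V)                      # target word biases b_#

    def ns_softmax_prob(h_fw, target_idx):
        # h_fw: (hid,) forward LSTM hidden state at position j; target_idx: index of s_{i,j+1}
        neg_idx = torch.randint(1, V, (n_neg,))    # negative sample set N_{s_{i,j}}
        idx = torch.cat([target_idx.view(1), neg_idx])
        logits = out_emb[idx] @ h_fw + out_bias[idx]
        return torch.softmax(logits, dim=0)[0]     # probability assigned to the true next word

    prob = ns_softmax_prob(torch.randn(hid), torch.tensor(42))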

    Similarly, the probability of the previous word s_{i,j-1} given s_{i,j:n} can be obtained from the backward LSTM hidden state, which is

    3.6|Model training

    Since the SL of the target domain is considered as zero-resource learning with cross-domain adaptation, the training processes are carried out on SL of the source domains and on LM of both the target domain and the source domains.

    Sequence labelling of source domains: For a labelled dataset from the source domains, the negative log-likelihood loss is used for model training, which is

    LM of source domains and the target domain: Given a raw-text dataset from the source domains, the forward and backward LMs are jointly trained by using NSSoftmax, whose loss function is expressed as

    In the same manner, for a raw-text dataset from the target domain, the loss function is

    In most cases, we jointly train the SL and LM tasks on both the target domain and the source domains. The overall loss is denoted as

    where λ_t is the task weight of LM, λ is the weight of the L2 regularisation and Θ stands for the parameter set.
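
    The combined objective was also lost in extraction; given the description above, we assume it takes the usual weighted-sum form

    \mathcal{L} \;=\; \mathcal{L}_{\mathrm{SL}} \;+\; \lambda_t\,\mathcal{L}_{\mathrm{LM}} \;+\; \lambda\,\lVert\Theta\rVert_2^2

    where L_SL is the CRF negative log-likelihood on the source-domain SL data and L_LM is the NSSoftmax loss on the source- and target-domain raw text.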

    4|EXPERIMENT

    In line with the purpose of SL tasks, we conduct two kinds of evaluation: one evaluates the cross-domain adaptability of the proposed model on NER and POS tagging tasks, and the other evaluates the performance on NER when training with crowd-sourced data. As mentioned above, the crowd-annotation task has only one source dataset, so its target-domain parameters are generated without computing attention weights.

    4.1|Dataset

    Cross-domain dataset: For the cross-domain adaptability evaluation, both OntoNotes v5 [7,23] and Universal Dependencies (UD) v2.3 [7,24] are taken as the datasets for cross-domain sequence labelling.

    OntoNotes v5 [23]: OntoNotes v5 is generally applied to NER tasks. In this experiment, the English part of OntoNotes v5, which involves 9 named entity types and 6 domains, is selected. Specifically, the 6 domains are broadcast conversation (BC), broadcast news (BN), magazine (MZ), newswire (NW), telephone conversation (TC), and web (WB).

    Universal Dependencies (UD) v2.3 [24]: The GUM part of UD v2.3 is used in the POS tagging task. This dataset is annotated with 17 tags and 7 fields. The 7 fields are the following: academic, bio, fiction, news, voyage, wiki, and interview.

    Details of each cross-domain dataset are given in Table 1.

    Crowd-Annotation Datasets: We use the crowd-annotation datasets [7] based on the 2003 CoNLL dataset [25], with Amazon's Mechanical Turk (AMT) [26] as the testing set for the NER task. Statistics of the datasets used in this experiment are shown in Table 2.

    Similar to Ref. [7], all the sentences as well as the corresponding entities are selected for further processing. All the datasets are subdivided into training sets, development sets and test sets (see Table 3).

    4.2|Hyperparameter settings

    In this experiment, NCRF++ is taken as the basic model for the various tasks [27]. The dimensions of the character-level embedding, word-level embedding and the LSTM hidden layer are set to 30, 100, and 200, respectively, and the model is trained for 100 epochs. All word embeddings are initialised using 100-dimensional word vectors pretrained by GloVe [28] and fine-tuned during training. The character embeddings are obtained via random initialisation [13]. The batch sizes of cross-domain NER and cross-domain POS tagging are 10 and 20, respectively, with a learning rate of 0.01 using RMSprop. We use domain-specific parameters of size 200.
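
    For reference, these settings can be collected into a single configuration; the key names below are ours, while the values are those stated in this section:

    config = {
        "char_emb_dim": 30,
        "word_emb_dim": 100,          # initialised from 100-d GloVe and fine-tuned
        "lstm_hidden_dim": 200,
        "domain_param_dim": 200,      # size of the domain-specific parameters I_k
        "epochs": 100,
        "batch_size": {"cross_domain_ner": 10, "cross_domain_pos": 20},
        "optimizer": "RMSprop",
        "learning_rate": 0.01,
    }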

    4.3|Baseline models

    To comprehensively evaluate the performance of the proposed model, several baseline methods are taken for comparison in the NER tasks. For all the baseline models, the dimensions of the character-level embedding, word-level embedding and the LSTM hidden layer are set to 30, 100 and 150, respectively. In addition, the model parameters are updated by stochastic gradient descent (SGD). The learning rate is initialised as 0.015 and decayed by 5% each epoch.

    MTL-MVT [29]: Multi-source data is fed into a multi-task learning model with parameter sharing among different tasks. The labels predicted from the multiple sources are aggregated by voting and applied to the target domain.

    MTL-BEA [30]: Based on a probabilistic graphical model, a transfer model and a generative model are established. Correspondingly, the transition probability is computed and the label for the target domain is thus generated.

    Co-NER-LM [13]: Cross-domain adaptation is performed using cross-domain LM, while knowledge transfer is carried out by designing a parameter generating network.

    TABLE 1 Statistics of cross‐domain datasets

    TABLE 2 Statistics of crowd‐annotation datasets

    TABLE 3 Dataset subdivision

    MULTI-TASK+PGN [13]: A parameter generation network is developed to generate the parameters of the LSTM for both the source domains and the target domain.

    ConNet [7]: A representation of each source is learnt from the annotations of multiple sources. In addition, a context-aware attention module is exploited to dynamically aggregate source-specific knowledge.

    BERT-BASE [31]: The pretrained BERT is employed to extract contextual information, while the parameters are fine-tuned on the specific tasks.

    For the purpose of model optimisation, our method is also trained using crowd-sourced data, and its effectiveness is verified in the NER tasks as well. In this setting, 6 crowd-sourced training models are taken as the baselines, which are the following:

    Crowd-Add [32]: An LSTM-Crowd model is devised where crowd components are element-wise added to the tag scores.

    Crowd-Cat [32]: In an LSTM-Crowd-based model, the crowd vectors are concatenated to the output of the LSTM hidden layer.

    MVT-SLM [7]: Majority voting over the crowd annotation data is conducted at the token level. Thus, the majority label is selected as the gold label for each token.

    MVS-SLM [7]: Similar to MVT, but the majority voting is at the sequence level.

    CRF-MA [26]: A probabilistic approach is devised for sequence labelling using CRFs with data from multiple annotators, which relies on a latent variable model where the reliability of the annotators is handled as latent variables.

    CL-MW (MW) [33]: A crowd layer is integrated into a CNN model to learn the weight matrices. The trained model is applied to label prediction.

    4.4|Results

    Evaluation on cross-domain adaptability: The experimental results of our model compared to the baseline methods are shown in Table 4 and Table 5. The performance on the cross-domain NER tasks is measured by F1 score, while that on cross-domain POS tagging is measured by accuracy. Among all these methods, the proposed model produces results competitive with the cutting-edge ConNet model. For the cross-domain NER tasks, our model obtains the highest F1 score on the evaluation settings of WB, TC and BC (see Table 4). On the other hand, Table 5 shows that our model achieves the best average accuracy, outperforming ConNet by 0.07%; the main reason is that the attention mechanism aggregates shared knowledge from multiple sources and thus eliminates the discrepancy among domains. In addition, the domain-specific knowledge is also obtained via LM. By contrast, both MULTI-TASK+PGN and Co-NER-LM use the LM as a bridge but ignore the discrepancy among different domains. Moreover, MTL-MVT is constructed on the basis of cross-domain parameter sharing and MTL-BEA exploits a probabilistic graphical model to predict the domain-specific knowledge, whereas both of these models fail to make full use of the knowledge from the source domains and to model the differences among domains. Since our model employs parameter transferring as well as LM training, it is reasonable to expect better performance in different target domains, as is indeed the case.

    TABLE 4 Experimental results of cross‐domain named entity recognition(NER)

    TABLE 5 Experimental results of cross‐domain part‐of‐speech(POS)tagging

    However, in comparison with the state-of-the-art methods, our method does not exceed the best methods in all of the cross-domain adaptability evaluations. According to Table 4, the score of our model is not as high as that of ConNet on three evaluation sets. Pretraining using LM not only learns domain-specific knowledge but also introduces unrelated domain information, which results in a drop of accuracy. Another possible explanation is that multi-domain transferring largely depends on the selection of the source domains. Taking the NER task as an example, we analyse the error by removing one specific domain at a time from the current source domains. The results are presented in Figure 7. In some cases, the target domain and the source domain are closely related. For instance, both BN and BC concern broadcasting, while BN and NW relate to journalism. Hence, for the target domain of either BC or NW, the F1 score decreases substantially when BN is removed. According to Figure 7, the performance drops by 3.62% and 3.25% for target domains BC and NW, respectively, without the source domain BN, which is significant. By contrast, for domains with little association, for example, TC and BN (the former contains a large number of colloquial expressions while the latter involves formal statements), there is less impact on each other. As an example, for the target domain BN, the F1 score even improves by 0.45% when the source domain TC is removed. We thus infer that a close connection between the target domain and the source domain effectively decreases the recognition error and facilitates the cross-domain adaptation.

    FIGURE 7 Drop of F1 by removing one source domain

    Furthermore, the attention weight between any two different domains of the OntoNotes v5 dataset is investigated (Figure 8). Similar to the aforementioned inference, the domains with a closer relation make a greater contribution to each other in the NER task. One can easily see that the two highest attention scores are generated between BN and NW as well as between BC and BN, which conforms to our analysis of the recognition error.

    Evaluation on crowd-annotation training: The performance of our model and the baselines on the real-world dataset AMT is exhibited in Table 6. There is a considerable F1-score gap between our model and the other 10 methods; the minimum performance gap, 2.81%, is observed against the Co-NER-LM model. Thus, our model shows its superiority in learning from noisy annotation data. Typically, most widely applied models obtain a comparatively high precision but a low recall. This issue stems from a deficiency in capturing the entity information of the target domain from multiple annotators. Note that only one source domain is used in the crowd-annotation task, so the application of domain-specific knowledge is no longer an advantage over the baselines. In addition, the crowd-annotation dataset contains a certain amount of noise that affects the training results. Compared with the baselines, the LM of our method not only learns the contextual information but also filters the noise from the samples. As such, a higher recall, as well as a higher F1-score, is achieved. Clearly, our model is a better alternative to the state-of-the-art methods.

    FIGURE 8 Attention weight between different domains.The vertical axis represents the target domain and the horizontal axis represents the source domain.The value on the right side stands for the attention weight

    TABLE 6 Experimental results of real‐world crowd‐sourced named entity recognition(NER)

    4.5|Ablation experiment

    In order to determine the importance of the different components in our model, an ablation study is carried out on the cross-domain tasks. Our full model is taken as the baseline. Specifically, MULTI-TASK-without LM denotes the removal of LM from the model, which is trained in the source domains using SL tasks only, with parameter generating via the attention mechanism; MULTI-TASK-without attention denotes the ablation of the attention network, while the model is trained on the SL and LM tasks; STM-without LM & Attention denotes that the model is trained in the source domains using only SL tasks, without the LM and the attention mechanism.

    According to Table 7, the removal of the attention mechanism and of LM results in average F1 declines of 2.50% and 2.86%, respectively, in the NER tasks. Likewise, for the POS task, the accuracy drops of MULTI-TASK-without LM and MULTI-TASK-without attention are 1.25% and 1.18%, respectively (see Table 8). The STM-without LM & Attention model has the worst results in all evaluation settings. For the NER task, since LM is more capable of learning domain-specific knowledge and contextual information, the average F1 of MULTI-TASK-without attention is slightly higher than that of MULTI-TASK-without LM. By contrast, the POS information is largely independent of the domain. Thus, the contributions of LM and the attention mechanism are comparable in the POS tasks.

    4.6|Case study

    To further verify the capability of the proposed model, it is instructive to visualise the model predictions. In this case, three sentences are selected and applied to the NER task. The proposed model with and without LM, as well as ConNet, are taken for comparison. According to Figure 9, both ConNet and our model capture all the nouns and predict their labels successfully. By contrast, the proposed model without LM misidentifies the entities 'Soviet', 'West' and 'Indies'. Compared against the real labels, all the entities in the three sentences are precisely recognised, which indicates the effectiveness of our model in SL tasks.

    TABLE 7 Ablation study in the cross‐domain named entity recognition(NER)task

    TABLE 8 Ablation study in cross‐domain part‐of‐speech(POS)task

    FIGURE 9 An example of sentence prediction results. B – beginning of the entity, I – intermediate of the entity, O – not an entity, MISC – other entity, PER – personal name, LOC – location name, and ORG – organisation name

    5|CONCLUSION

    In this work, we establish a Bi-LSTM-based architecture for SL, which integrates the parameter transferring principle, the attention mechanism, CRF and NSSoftmax. Despite the discrepancy of information distribution among domains, the proposed model is capable of extracting more related knowledge from multi-source domains and learning specific context from the target domain. With the LM training, our model thus shows its distinctiveness in cross-domain adaptation. Experiments are conducted on NER and POS tagging tasks to validate that our model stably obtains a decent performance in cross-domain adaptation. In addition, with training on crowd annotations, the experimental results for NER are further improved, indicating the effectiveness of learning from noisy annotations for higher-quality labels.

    ACKNOWLEDGMENTS

    This work was supported by the National Statistical Science Research Project of China under Grant No. 2016LY98, the Science and Technology Department of Guangdong Province in China under Grant Nos. 2016A010101020, 2016A010101021 and 2016A010101022, the Characteristic Innovation Projects of Guangdong Colleges and Universities (No. 2018KTSCX049), and the Science and Technology Plan Project of Guangzhou under Grant Nos. 202102080258 and 201903010013.

    CONFLICT OF INTEREST

    The authors declare no conflict of interest.

    DATA AVAILABILITY STATEMENT

    Data that support the findings of this study are available from the corresponding author upon reasonable request.

    ORCID

    Bo Zhou https://orcid.org/0000-0001-8097-6668
