
    Graph-Based Chinese Word Sense Disambiguation with Multi-Knowledge Integration

    2019-11-07 03:12:32
    Computers Materials & Continua, October 2019

    Wenpeng Lu, Fanqing Meng, Shoujin Wang, Guoqiang Zhang, Xu Zhang, Antai Ouyang and Xiaodong Zhang

    Abstract: Word sense disambiguation (WSD) is a fundamental and significant task in natural language processing, which directly affects the performance of downstream applications. However, WSD is very challenging due to the knowledge bottleneck problem, i.e., it is hard to acquire abundant disambiguation knowledge, especially in Chinese. To solve this problem, this paper proposes a graph-based Chinese WSD method with multi-knowledge integration. In particular, a graph model that combines various Chinese and English knowledge resources by word sense mapping is designed. Firstly, the content words in a Chinese ambiguous sentence are extracted and mapped to English words with BabelNet. Then, English word similarity is computed based on English word embeddings and a knowledge base, while Chinese word similarity is evaluated with Chinese word embeddings and HowNet, respectively. The weights of the three kinds of word similarity are optimized with a simulated annealing algorithm so as to obtain their overall similarities, which are utilized to construct a disambiguation graph. A graph scoring algorithm evaluates the importance of each word sense node and judges the right senses of the ambiguous words. Extensive experimental results on a SemEval dataset show that our proposed WSD method significantly outperforms the baselines.

    Keywords: Word sense disambiguation, graph model, multi-knowledge integration, word similarity.

    1 Introduction

    Ambiguous words are ubiquitous in human languages, which causes great confusion for natural language processing (NLP). Word sense disambiguation determines the meaning of a word according to its context; it is a fundamental task in NLP that directly affects downstream applications, e.g., machine translation, information retrieval, text categorization and automatic summarization [Raganato, Camacho-Collados and Navigli (2017); Lu, Wu, Jian et al. (2018); Xiang, Li, Hao et al. (2018)].

    The existing WSD methods are divided into three categories: supervised, unsupervised and knowledge-based methods. Supervised methods train classifiers with machine learning on sense-annotated corpora, which are then utilized to judge the senses of new instances [Raganato, Bovi and Navigli (2017)]. Though supervised methods can achieve the best disambiguation performance, their effectiveness depends on the size and quality of the sense-annotated corpus. Due to the limitation of annotated corpora, supervised methods are hard to apply to large-scale WSD tasks. Unsupervised methods distinguish the categories of word senses according to their context with clustering technology; they can only differentiate sense categories instead of senses and cannot annotate each instance with its accurate sense [Panchenko, Ruppert, Faralli et al. (2017)]. Knowledge-based methods judge the sense of each instance according to its context and various knowledge bases. Though the performance of knowledge-based methods is not better than that of supervised ones, they can utilize all kinds of existing knowledge bases and can achieve better coverage [Raganato, Camacho-Collados and Navigli (2017)]. The knowledge-based method is the only one applicable to large-scale WSD tasks and has achieved good performance in SemEval [Moro and Navigli (2015); Navigli and Ponzetto (2012); Raganato, Camacho-Collados and Navigli (2017); Chen, Liu and Sun (2014)]. The existing knowledge bases contain abundant semantic relationships, which can form a huge semantic graph and are beneficial to WSD. Graph-based WSD is a representative knowledge-based method, which is the most popular approach and has attracted more and more attention in the NLP field [Dongsuk, Kwon, Kim et al. (2018); Duque, Stevenson, Martinez-Romo et al. (2018); Meng, Lu, Zhang et al. (2018)]. Graph-based WSD constructs the disambiguation graph according to semantic knowledge relationships, so its performance is greatly affected by the size and quality of knowledge resources. The knowledge acquisition bottleneck is the key factor that limits its development, and it is more serious in Chinese due to the rareness of Chinese semantic knowledge resources [Lu (2018)].

    The traditional graph-based Chinese WSD method usually utilizes one kind of Chinese knowledge resource and is thus extremely troubled by the knowledge bottleneck problem [Lu, Huang and Wu (2013); Yang and Huang (2012)]. Compared with knowledge resources in Chinese, those in English are more mature and abundant. If we can integrate various Chinese and English knowledge resources, which complement each other, we can fully exploit all kinds of disambiguation knowledge. This shows the potential to significantly improve the performance of Chinese WSD.

    Apparently, how to integrate the existing Chinese and English knowledge resources is highly challenging, as their senses are not mapped to each other. Besides, how to evaluate the overall similarities of sense pairs is difficult, as the relative importance of each knowledge resource is unknown. Inspired by the significant progress made on representation learning and optimization algorithms in various tasks such as sentence representation [Mikolov, Sutskever, Chen et al. (2013); Subramanian, Trischler, Bengio et al. (2018)] and simulated annealing optimization [Mafarja and Mirjalili (2017); Mamano and Hayes (2017)], this work integrates the existing English and Chinese knowledge resources and optimizes their weights to construct a knowledge graph so as to disambiguate ambiguous words in Chinese. The main ideas and contributions are as follows:

    ● We propose a novel knowledge integration method, which merges English and Chinese knowledge resources by sense definition alignment with the help of sentence representation. The method is flexible and can integrate various knowledge resources conveniently.

    ● We propose a simulated annealing algorithm to optimize the weights of the various knowledge resources. With the optimized weights, the semantic relationships between senses are evaluated to construct an overall knowledge graph.

    ● To the best of our knowledge, this is the first work on graph-based Chinese WSD with multi-knowledge integration. This work maps and integrates a variety of English knowledge resources into Chinese, and optimizes their weights with a simulated annealing algorithm to compute the similarities of sense pairs. According to the senses and their similarities, an overall knowledge graph is constructed, where a graph scoring algorithm evaluates the importance of the sense nodes to judge the right sense.

    Extensive experiments on a SemEval WSD task are conducted to evaluate our proposed method. The results show that our method substantially outperforms the existing methods, with at least a 2.4% improvement.

    The rest of this paper is organized as follows: Section 2 discusses the related work and gives a brief summary of WSD. Section 3 details the proposed graph-based Chinese WSD method with multi-knowledge integration, where each key module is described. Section 4 provides the empirical results by comparing our method with the baselines. Finally, we conclude this work and outline future work in Section 5.

    2 Related work

    Graph-based WSD methods are inspired by the lexical chain, which refers to a sequence of semantically related words in a given text that are linked together by lexical semantic relations, e.g., eat → apple → fruit → banana. Graph-based WSD is the most popular method in knowledge-based WSD; it constructs a knowledge graph with senses as nodes and semantic relations as edges. Based on the structure of the knowledge graph, the right sense is selected [Dongsuk, Kwon, Kim et al. (2018)].

    Galley et al. [McKeown and Galley (2003)] have proposed a WSD method based on lexical chains, introduced as follows. Firstly, when constructing the disambiguation graph, all possible senses are added to the graph as nodes, then the words in the ambiguous sentence are processed one by one. If there exists a semantic relationship between the current word and the processed ones, this relationship is added to the graph as an edge, which is assigned a weight according to the type of relationship and the distance. After the graph is constructed, the weights of the sense nodes of ambiguous words are summed and the sense with the greatest weight is selected as the right sense. The method achieves 62.1% accuracy on the SemCor noun dataset.

    Mihalcea [Mihalcea (2004)] has proposed a WSD method based on the PageRank algorithm, which takes all the senses of the words as nodes and the semantic relationships between the words as edges to construct the disambiguation graph. The PageRank algorithm is applied on the graph to evaluate the importance of each sense node and judge the right sense. Agirre et al. propose personalized PageRank for WSD [Agirre and Soroa (2009)], which pays more attention to some words and improves the evaluation of sense importance.

    Navigli et al. [Navigli and Velardi (2005)] propose a structural semantic interconnections (SSI) algorithm for WSD, which creates structural specifications of the possible senses for each word and constructs grammar rules to describe the interconnection relations. The most suitable sense is selected according to the grammar. SSI achieves the best performance in Senseval-3 and SemEval-2007.

    Yang et al. [Yang and Huang (2012)] propose a graph-based WSD method based on word distance, which strengthens the influence of near words and weakens that of far words when evaluating the importance of sense nodes in the graph. Lu et al. [Lu, Huang and Wu (2014)] propose a graph-based WSD method based on domain knowledge, which integrates domain knowledge into the disambiguation framework and improves multiple graph scoring algorithms.

    Traditional graph-based methods try to construct the subgraph of all words in a sentence, which may introduce noisy information [Navigli and Lapata (2010)]. To avoid this problem, Dongsuk et al. [Dongsuk, Kwon, Kim et al. (2018)] propose a WSD method based on subgraph reconstruction, where the context words of an ambiguous word used to construct the subgraph are selected with a word similarity threshold. The word similarity is computed based on an embedding generated by Doc2Vec [Le and Mikolov (2014)], which encodes information of the semantic relational path of words in BabelNet [Navigli and Ponzetto (2012)].

    The existing graph-based WSD methods above construct the disambiguation graph according to some lexical knowledge resources, e.g., WordNet, BabelNet and HowNet [Miller (1995); Navigli and Ponzetto (2012); Zhendong and Qiang (2006)]. Most of them only utilize one kind of knowledge resource. Because of the limited size and quality of a single resource, these graph-based methods suffer from the knowledge bottleneck. Apparently, the knowledge resources are different and complementary. It is necessary to integrate as many existing resources as possible to strengthen the ability of WSD systems. Compared with English, the available Chinese semantic resources are rarer, which makes the problem more critical. How to integrate the various existing semantic resources to improve the performance of Chinese WSD is an important issue waiting to be solved.

    3 The proposed WSD method

    In this section, we describe the framework of our graph-based WSD method and its key modules in detail. Within the framework, for sense pairs of Chinese words, the English knowledge resources are utilized to compute their similarities together with the Chinese resources. The weights of the similarities are optimized with a simulated annealing algorithm. The disambiguation graph is constructed with senses as nodes, semantic relations as edges and similarities as their weights, where a graph algorithm is utilized to score each sense node and select the right sense. The framework and its key modules are introduced as follows.

    Figure 1:Model architecture of sentence matching

    3.1 Framework of the WSD method

    The framework of our proposed graph-based WSD with multi-knowledge integration is shown in Fig. 1. The content words in a Chinese sentence are extracted and mapped into English words with BabelNet. By this mapping, the resources in English become available for Chinese words. Then, according to English and Chinese knowledge resources, three kinds of word similarity are computed, whose weights are optimized with a simulated annealing algorithm so as to obtain overall similarities to construct the disambiguation graph. The importance score of each sense node in the graph is evaluated to select the right senses of ambiguous words. The detailed framework is described as follows:

    (1) Extract the content words after preprocessing the Chinese ambiguous sentence.

    (2) Map Chinese word senses into English ones [Meng, Lu and Xue (2017)].

    (3) Compute word similarity based on English word embeddings and knowledge bases, e.g., Wikipedia, BabelNet, Gigaword [Parker, Graff, Kong et al. (2011)].

    (4) Compute word similarity based on Chinese word embeddings trained on the Sogou corpus.

    (5) Compute word similarity based on HowNet [Zhendong and Qiang (2006)].

    (6) Optimize the relative weights of the above three kinds of word similarity with a simulated annealing algorithm so as to obtain overall similarities.

    (7) Take word senses as nodes, semantic relations as edges and overall similarities as edge weights to construct the disambiguation knowledge graph.

    (8) Evaluate the importance of each sense node in the graph with a graph scoring algorithm to select the right sense.

    As shown in Fig. 1, the sense mapping module, the three word similarity modules, the weight optimization module, and the graph construction and scoring module are the key components of our proposed method; they are explained in the following subsections.

    3.2 Word sense mapping

    Given the rareness of Chinese semantic knowledge resources, mapping Chinese word senses to English ones and utilizing English resources to compensate for the deficiency of Chinese resources is a practicable solution. In order to map the senses in Chinese and English semantic resources, we have proposed a method to map the senses between Chinese and English with BabelNet and an English-Chinese dictionary [Meng, Lu and Xue (2017); Navigli and Ponzetto (2012); Ke (2011)].

    For each English sense, BabelNet provides a detailed definition with several short examples. Besides, the English-Chinese dictionary, i.e., Collins COBUILD Advanced Learner's English-Chinese Dictionary, provides detailed bilingual definitions with bilingual examples. That is, both BabelNet and the Collins dictionary provide an English description for each sense, and the latter also provides the corresponding Chinese sense annotation. If an English sense corresponds with a Chinese sense, the meanings of their English definitions or examples should be similar. This is the key clue to find and verify the mapping relations between Chinese and English senses.

    With this in mind, we generate embedding representations for the English definitions and examples. According to their cosine similarities, we find the corresponding relationships between BabelNet and Collins definitions in English. Then, as the Collins English-Chinese dictionary provides English and Chinese definitions simultaneously, we can further obtain the mapping relations between English and Chinese. The detailed implementation is introduced as follows.

    Firstly, for each Chinese sense, its possible candidate English senses are prepared according to HowNet or a Chinese-English dictionary. Secondly, for the candidate English senses, we get their definitions and examples according to BabelNet, and collect the bilingual definitions and examples according to an English-Chinese bilingual dictionary. Thirdly, inspired by related work with Word2Vec [Mikolov, Sutskever, Chen et al. (2013); Le and Mikolov (2014)], we generate an embedding representation for each sentence in the definitions and examples. Finally, the cosine similarities among the embedding representations are computed to find the corresponding English sense for each Chinese sense. Once the senses are mapped between Chinese and English knowledge resources, we can utilize English semantic resources to assist Chinese WSD tasks, which provides great convenience for Chinese WSD. Another of our papers describes the above procedure in detail; its F1-measure reaches 75.75% [Meng, Lu and Xue (2017)].
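    The alignment step above can be sketched as follows. This is a minimal illustration only: it assumes averaged word vectors as the sentence representation (the paper uses Word2Vec/Doc2Vec-style embeddings), and the helper names `embed` and `map_sense` are hypothetical.

```python
import numpy as np

def embed(sentence, word_vectors, dim=100):
    """Average word vectors as a simple sentence embedding (a stand-in for
    the Word2Vec/Doc2Vec sentence representations used in the paper)."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    """Cosine similarity, guarding against zero vectors."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

def map_sense(babelnet_gloss, candidate_glosses, word_vectors):
    """Return the index of the dictionary gloss whose English definition is
    most similar to the BabelNet gloss."""
    target = embed(babelnet_gloss, word_vectors)
    scores = [cosine(target, embed(g, word_vectors)) for g in candidate_glosses]
    return int(np.argmax(scores))
```

    In the real system the chosen gloss pair then yields the Chinese-English sense mapping, since the bilingual dictionary entry carries both languages.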

    3.3 Word similarity based on English word embeddings and knowledge base

    After the mapping in the last subsection, the Chinese and English senses are mapped to each other. Then, the English knowledge resources can be utilized to disambiguate the words in Chinese. When the disambiguation graph is constructed, each semantic relation between sense nodes needs to be assigned a reasonable weight, which should take as much information as possible into account. In this subsection, the information from English knowledge resources is considered.

    In SemEval-2017 Task 2 (http://alt.qcri.org/semeval2017/task2/), i.e., multilingual and cross-lingual semantic word similarity [Camacho-Collados, Pilehvar, Collier et al. (2017)], we have proposed a method for word similarity computation based on English word embeddings and knowledge base, as described in Meng et al. [Meng, Lu, Zhang et al. (2017)]. In the competition, our method reached 0.778 on the official evaluation measure, winning the second place on the English monolingual word similarity subtask. Since the competition system achieved excellent performance, we integrate it into our proposed WSD framework and utilize it to compute the word similarity based on English knowledge resources, which is introduced as follows.

    The method is a combination consisting of two basic modules: the similarity based on word embeddings and the similarity based on a knowledge base, i.e., BabelNet. For the former, the Word2Vec toolkit (https://code.google.com/p/word2vec/) is used to train word embeddings on the English Wikipedia corpus [Mikolov, Sutskever, Chen et al. (2013)]. With the embeddings of each word pair, their cosine similarity is computed. For the latter, BabelNet (https://babelnet.org/) contains a large number of concepts and semantic relations, such as synonymy, hypernymy and meronymy. With the BabelNet API, we can obtain all of the semantic relations between two words. According to the shortest path, the similarity of the word pair is computed. The similarity based on word embeddings and the similarity based on the knowledge base are linearly combined as the overall similarity based on English knowledge resources. Another of our papers introduces the implementation in detail [Meng, Lu, Zhang et al. (2017)]. The method is flexible and can combine more knowledge resources.
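    As a rough sketch of this combination, the snippet below linearly mixes an embedding cosine similarity with a shortest-path similarity over a toy relation graph. The adjacency dictionary, the `1 / (1 + path length)` scoring and the mixing weight `beta` are illustrative assumptions, not the exact formulas of the competition system.

```python
import numpy as np
from collections import deque

def path_similarity(graph, a, b):
    """BFS shortest-path similarity over a toy semantic network (a stand-in
    for BabelNet relations): sim = 1 / (1 + shortest path length)."""
    if a == b:
        return 1.0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        for nb in graph.get(node, []):
            if nb == b:
                return 1.0 / (2 + dist)  # path length is dist + 1
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, dist + 1))
    return 0.0

def english_similarity(w1, w2, word_vectors, graph, beta=0.5):
    """Linear combination of embedding cosine and KB path similarity."""
    v1, v2 = word_vectors[w1], word_vectors[w2]
    cos = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    return beta * cos + (1 - beta) * path_similarity(graph, w1, w2)
```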

    3.4 Word similarity based on Chinese word embeddings

    In the last subsection, with the support of word sense mapping, word similarity based on English word embeddings was integrated into our WSD framework. Since we aim at the disambiguation problem in Chinese, word similarity based on Chinese word embeddings is crucial and necessary.

    As Word2Vec has demonstrated a powerful ability in various tasks [Mikolov, Sutskever, Chen et al. (2013)], we continue to utilize it to generate Chinese word embeddings, which are trained on the Sogou news corpus (http://www.sogou.com/labs/resource/ca.php). With the Chinese word embeddings, we compute their cosine similarity as the word similarity based on Chinese word embeddings.

    3.5 Word similarity based on HowNet

    For the word similarity of Chinese words, besides the word embedding method in the last subsection, HowNet (http://www.keenage.com/) also provides an API interface to compute word similarity [Zhendong and Qiang (2006)].

    HowNet is a widely used semantic knowledge base, which describes concepts in Chinese and English and the relationships among concepts and their attributes. There are about 800 primitive lexemes in HowNet, which are the basic and smallest units of meaning that cannot be divided further. All concepts in HowNet are described with these basic lexemes.

    HowNet is widely applied in the Chinese NLP field and provides a convenient API interface, i.e., Hownet_GET_Concept_Similarity, to compute the semantic similarity between two concepts [Yang and Huang (2012)]. The similarity considers multiple relationships from HowNet, including four kinds of primitive lexeme similarities [Qun and Sujian (2002)], which are computed according to their path distance in the HowNet hierarchical structure.

    3.6 Weight optimization with simulated annealing algorithm

    The above three similarity methods compute word similarities with different semantic knowledge resources, which are complementary to each other. In order to fully utilize their respective advantages, we propose a weight optimization procedure based on the simulated annealing algorithm to automatically decide the weight parameters of the three similarities, which are used to linearly combine them so as to obtain a more reasonable overall similarity. The procedure to optimize the weight parameters is shown in Algorithm 1. The core of the simulated annealing algorithm for weight optimization is described as:

\[
P = \begin{cases} 1, & \text{if } result(x_{new}) \ge result(x_{old}) \\ \exp\!\left(\dfrac{result(x_{new}) - result(x_{old})}{t}\right), & \text{otherwise} \end{cases} \tag{1}
\]

    where result(x) is the target function, i.e., the disambiguation accuracy, δ is the cooling rate and t is the temperature. If the result of the new parameters x_new is better than that of x_old, the new parameters are selected with a probability of 1. Otherwise, the new parameters are selected with probability exp((result(x_new) - result(x_old)) / t).

    In Algorithm 1, the parameters x, y, z are the weights of the three kinds of word similarity, which need to be optimized. Line 1 is the initialization operation, which sets the initial temperature t as 100, the minimal cooling temperature t_min as 0.001, the cooling rate δ as 0.98 and the maximum iterations k as 100 in the experiments. Lines 4-5 assign a random double value to x, which determines the value of z. In Line 6, getEvalResult is the target function, which returns the disambiguation accuracy given the parameters x, y, z. Line 7 generates an updated value x_new from the neighbourhood of x. Lines 8-18 decide whether the parameter x is updated with the new x_new, as described in Eq. (1). Line 19 changes the value of t with the cooling rate δ. We obtain the three optimized weight parameters by running Algorithm 1 twice. In the first run, we set the value of y as 1/3, and get the optimized weights x, z. Then, we keep the smaller of them as a final weight parameter, and run the algorithm again to get the other two weights. The parameters satisfy x + y + z = 1, x ≥ 0, y ≥ 0, z ≥ 0.
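    The optimization loop of Algorithm 1 can be sketched as below. This is a hedged reimplementation under stated assumptions: `accuracy` stands in for getEvalResult, y is held fixed as in the paper's first run, and the neighbourhood move is a small random perturbation of x; the acceptance rule follows Eq. (1).

```python
import math
import random

def anneal_weights(accuracy, y=1/3, t=100.0, t_min=0.001, delta=0.98, k=100, seed=0):
    """Simulated-annealing sketch of Algorithm 1: optimize x (with y fixed,
    z = 1 - x - y), accepting a worse solution with probability
    exp((new - cur) / t), as in Eq. (1)."""
    rng = random.Random(seed)
    x = rng.uniform(0.0, 1.0 - y)            # random initial weight
    cur = accuracy(x, y, 1.0 - x - y)
    while t > t_min:
        for _ in range(k):
            # neighbourhood move, kept inside [0, 1 - y]
            x_new = min(max(x + rng.uniform(-0.05, 0.05), 0.0), 1.0 - y)
            new = accuracy(x_new, y, 1.0 - x_new - y)
            if new >= cur or rng.random() < math.exp((new - cur) / t):
                x, cur = x_new, new          # accept the candidate
        t *= delta                           # cooling step (Line 19)
    return x, y, 1.0 - x - y
```

    With a smooth surrogate objective, the loop climbs toward the weight that maximizes the (stand-in) accuracy while the temperature decays from 100 to 0.001.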

    After the weight parameters are optimized, the final overall word similarity is decided as:

\[
sim(ws, ws') = x \cdot sim_{en}(ws, ws') + y \cdot sim_{vec}(ws, ws') + z \cdot sim_{how}(ws, ws') \tag{2}
\]

    where ws and ws' are two senses, sim_en is the word similarity based on English word embeddings and knowledge base, sim_vec is the word similarity based on Chinese word embeddings, sim_how is the word similarity based on HowNet, and their optimized weight parameters are x, y, z, respectively.

    3.7 Disambiguation graph construction

    In order to construct the disambiguation graph, we take word senses as nodes, semantic relationships as edges, and the overall word similarities as the weights of edges.

    As we utilize Chinese and English knowledge resources, each sense is represented as a triple, i.e., Word(ID, Sword, Enword). ID is the ID of a sense or concept. Sword is the first primitive lexeme of the concept definition in HowNet. Enword is its corresponding description in English, i.e., the mapping from Chinese to English. With this triple representation, we can easily integrate the three kinds of word similarity. For example, "中醫" has two senses, which can be represented as "中醫(157329, 人, practitioner of Chinese medicine)" and "中醫(157332, 知識, traditional Chinese science)", whose word similarities with other senses can be computed with Eq. (2).
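    The construction step can be sketched minimally as follows, assuming a `similarity` callable standing in for the overall similarity of Eq. (2) and a pruning threshold whose value here is an illustrative assumption.

```python
def build_graph(senses, similarity, threshold=0.1):
    """Construct the disambiguation graph: word senses as nodes and weighted
    edges between sufficiently similar senses.  `senses` maps each word to
    its candidate senses; `similarity(s1, s2)` is a stand-in for the overall
    similarity of Eq. (2)."""
    nodes = [(w, s) for w, cands in senses.items() for s in cands]
    edges = {}
    for i, (w1, s1) in enumerate(nodes):
        for w2, s2 in nodes[i + 1:]:
            if w1 == w2:
                continue  # no edges between senses of the same ambiguous word
            sim = similarity(s1, s2)
            if sim > threshold:
                edges[(s1, s2)] = sim
    return nodes, edges
```

    The resulting weighted graph is exactly the input expected by the scoring step in the next subsection.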

    3.8 Graph scoring algorithm

    The PageRank algorithm is selected to evaluate and score the importance of each sense node in the disambiguation graph. If a sense node connects with more nodes of higher importance, its importance is higher, which means that the sense is more related to the context words. As the algorithm iterates, the importance of each node is gradually differentiated. The sense node with the maximum importance is selected as the right sense of the ambiguous word. The importance of each node is updated with the following equation:

\[
score(v) = \frac{1 - \alpha}{N} + \alpha \sum_{u \in in(v)} \frac{score(u)}{|out(u)|}
\]

    where v refers to a sense node, α indicates the probability of continuing the current Markov chain, 1 - α indicates the probability of randomly selecting another node instead of continuing the current Markov chain, N is the total number of sense nodes, |out(u)| refers to the out-degree of node u, and in(v) indicates the set of all nodes that link to node v.
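    The PageRank update described above can be implemented as a straightforward iterative sketch; the damping factor of 0.85 and the fixed iteration count are illustrative choices, not values stated in the paper.

```python
def pagerank(nodes, in_links, out_degree, alpha=0.85, iters=50):
    """Iterative PageRank sketch of the scoring step:
    score(v) = (1 - alpha)/N + alpha * sum(score(u)/|out(u)| for u in in(v))."""
    n = len(nodes)
    score = {v: 1.0 / n for v in nodes}          # uniform initialization
    for _ in range(iters):
        score = {
            v: (1 - alpha) / n
               + alpha * sum(score[u] / out_degree[u] for u in in_links.get(v, []))
            for v in nodes
        }
    return score
```

    After convergence, for each ambiguous word the candidate sense node with the highest score is selected.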

    4 Experiments

    4.1 Data sets and evaluation measure

    The benchmark dataset is SemEval task #5, i.e., the multilingual Chinese-English lexical sample task [Jin, Wu and Yu (2007)], which consists of 19 nouns and 21 verbs. Both training and test corpora are provided. The detailed information of this dataset is shown in Tab. 1. Our proposed WSD method is unsupervised, so it only utilizes the test instances instead of the training ones.

    Table 1:Summary of SemEval dataset

    The macro-average p_mar is selected to evaluate the performance of WSD methods, which is defined as:

\[
p_{mar} = \frac{1}{N} \sum_{i=1}^{N} \frac{m_i}{n_i}
\]

    where N is the number of all word-types, m_i is the number of correctly disambiguated instances of word-type i, and n_i is the number of all test instances of this word-type.
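    The measure is simple to compute; in this sketch `correct` holds the m_i values and `totals` holds the n_i values for each word-type.

```python
def macro_average(correct, totals):
    """Macro-averaged precision p_mar: the mean over word-types of the
    per-type accuracy m_i / n_i."""
    assert len(correct) == len(totals) and len(correct) > 0
    return sum(m / n for m, n in zip(correct, totals)) / len(correct)
```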

    4.2 Baselines

    Our proposed graph-based WSD with multi-knowledge integration is compared with four different types of representative unsupervised WSD methods, i.e., TorMd, HowGraph, EnGraph and SogouGraph [Jin, Wu and Yu (2007); Meng (2018)].

    · TorMd: An unsupervised WSD method proposed by the University of Toronto, which won the first place in this SemEval competition.

    · HowGraph: An abridged version of our proposed method, which only utilizes the word similarity based on HowNet.

    · EnGraph: Another abridged version of our proposed method, which only utilizes the word similarity based on English word embeddings and knowledge base.

    · SogouGraph: Another abridged version of our proposed method, which only utilizes the word similarity based on Chinese word embeddings trained on the Sogou corpus.

    · MultiGraph: The full version of our proposed method, which integrates the three kinds of word similarity. In the experiments, the optimized weight parameters for the word similarities based on HowNet, English word embeddings and knowledge base, and Chinese word embeddings are 0.028, 0.336 and 0.636, respectively.

    4.3 Results and analysis

    4.3.1 Comparison of overall performances

    The performance comparison of all methods is shown in Tab. 2. All the graph-based methods, including our proposed method and its three abridged versions, outperform the TorMd method, which demonstrates that graph-based methods have the potential to achieve a significant improvement.

    MultiGraph shows 6.1%, 2.4%, 4.1% and 3.6% improvement over TorMd, HowGraph, EnGraph and SogouGraph, respectively. The results demonstrate that utilizing a single knowledge resource hurts the performance, and show that our proposed WSD framework integrating multiple resources is powerful. In order to achieve satisfactory performance, it is necessary to integrate as many knowledge resources as possible in graph-based WSD.

    Table 2:Comparison of overall performances.Imp refers to the improvement made by MultiGraph over the corresponding method

    4.3.2 Comparison of noun performances

    The performance comparison on nouns is shown in Tab. 3. MultiGraph achieves the best performance, demonstrating 4.5%, 4.4%, 3.5% and 4.3% improvement over TorMd, HowGraph, EnGraph and SogouGraph, respectively. As shown in Tab. 3, HowGraph, EnGraph and SogouGraph have different advantages on different words, while MultiGraph integrates their advantages so as to greatly improve the noun performance.

    Table 3:Comparison of noun performances.For each word,its best performance is boldfaced

    4.3.3 Comparison of verb performances

    The performance comparison on verbs is shown in Tab. 4. MultiGraph still achieves the best performance, demonstrating 9.4%, 0.5%, 4.6% and 3.0% improvement over TorMd, HowGraph, EnGraph and SogouGraph, respectively. The significant improvements demonstrate the effectiveness of our proposed graph-based WSD method with multi-knowledge integration, i.e., MultiGraph.

    Table 4:Comparison of verb performances

    5 Conclusion

    This work proposes a novel graph-based Chinese WSD method with multi-knowledge integration. Different from the existing knowledge-based methods, our method utilizes Chinese and English semantic knowledge simultaneously to disambiguate words. Three different kinds of word similarity from various knowledge resources are weighted via simulated annealing and integrated to compute an overall similarity. With senses as nodes, semantic relations as edges and the overall similarities as edge weights, the disambiguation graph is constructed, which is evaluated with a graph scoring algorithm to select the right senses. Extensive experiments on the SemEval dataset show that the proposed method significantly outperforms the four baselines. In this work, we only use three kinds of knowledge resources, which is merely a first trial at integrating multilingual knowledge resources. Our future work is to find more semantic resources and design more sophisticated integration methods for graph-based WSD.

    Acknowledgement: The research work is supported by the National Key R&D Program of China under Grant No. 2018YFC0831704, the National Natural Science Foundation of China under Grant No. 61502259, the Natural Science Foundation of Shandong Province under Grant No. ZR2017MF056, and the Taishan Scholar Program of Shandong Province in China (directed by Prof. Yinglong Wang).

18禁黄网站禁片免费观看直播| 精品无人区乱码1区二区| ponron亚洲| 插阴视频在线观看视频| 国产精品1区2区在线观看.| 亚洲av第一区精品v没综合| 看黄色毛片网站| 成年女人永久免费观看视频| 99热这里只有是精品50| 久久久成人免费电影| 亚洲不卡免费看| 成人国产麻豆网| 内地一区二区视频在线| 久久精品影院6| 欧美+日韩+精品| 日韩高清综合在线| 婷婷精品国产亚洲av在线| 九九爱精品视频在线观看| 亚洲18禁久久av| 欧美日本视频| 性色avwww在线观看| 亚洲激情五月婷婷啪啪| 亚洲精品乱码久久久v下载方式| 国产国拍精品亚洲av在线观看| 97超碰精品成人国产| 免费人成视频x8x8入口观看| 一进一出抽搐gif免费好疼| 99久国产av精品| 日本精品一区二区三区蜜桃| 日日摸夜夜添夜夜添av毛片| 精品久久久久久久末码| 亚洲精品日韩在线中文字幕 | 老熟妇乱子伦视频在线观看| 午夜爱爱视频在线播放| 午夜a级毛片| av专区在线播放| 69人妻影院| 久久精品91蜜桃| 桃色一区二区三区在线观看| 国产精品免费一区二区三区在线| av在线亚洲专区| 午夜免费激情av| 欧美日本亚洲视频在线播放| 国产 一区 欧美 日韩| 亚洲性夜色夜夜综合| 一级毛片我不卡| 男人舔奶头视频| 国内久久婷婷六月综合欲色啪| 丰满乱子伦码专区| 网址你懂的国产日韩在线| 熟女人妻精品中文字幕| 国产精品av视频在线免费观看| 亚洲av中文字字幕乱码综合| 国产精品久久久久久av不卡| 日韩精品青青久久久久久| 国内久久婷婷六月综合欲色啪| 综合色av麻豆| av天堂中文字幕网| 欧美色欧美亚洲另类二区| 九九久久精品国产亚洲av麻豆| 看黄色毛片网站| 一边摸一边抽搐一进一小说| 2021天堂中文幕一二区在线观| 国产又黄又爽又无遮挡在线| 久久综合国产亚洲精品| 色综合色国产| 日韩欧美精品免费久久| 亚洲欧美日韩东京热| 日韩欧美 国产精品| 男人和女人高潮做爰伦理| 国产亚洲精品久久久久久毛片| 国产人妻一区二区三区在| 深夜a级毛片| 亚洲国产精品久久男人天堂| 一级a爱片免费观看的视频| 性欧美人与动物交配| 国产高清激情床上av| 黄色欧美视频在线观看| 国产一区二区在线av高清观看| 在线播放无遮挡| 听说在线观看完整版免费高清| 亚洲精华国产精华液的使用体验 | 无遮挡黄片免费观看| 又爽又黄无遮挡网站| 国产av不卡久久| 亚洲丝袜综合中文字幕| 欧美性猛交黑人性爽| 中文资源天堂在线| 一区二区三区四区激情视频 | 麻豆久久精品国产亚洲av| 国产人妻一区二区三区在| 国产三级在线视频| 99热这里只有精品一区| 97超视频在线观看视频| 国产在线男女| 一级毛片久久久久久久久女| 婷婷亚洲欧美| 99热网站在线观看| 精品久久久久久久末码| 国产熟女欧美一区二区| 麻豆国产av国片精品| 国产高清视频在线播放一区| 麻豆乱淫一区二区| 亚洲国产精品sss在线观看| 夜夜夜夜夜久久久久| 色吧在线观看| 伊人久久精品亚洲午夜| 国产成人aa在线观看| 91麻豆精品激情在线观看国产| 欧美绝顶高潮抽搐喷水| 精品一区二区三区视频在线| 九九久久精品国产亚洲av麻豆| 国产成人一区二区在线| 亚洲欧美中文字幕日韩二区| 国产成人91sexporn| 内射极品少妇av片p| 国产美女午夜福利| 美女黄网站色视频| 伊人久久精品亚洲午夜| 热99re8久久精品国产| 男人舔女人下体高潮全视频| 国产大屁股一区二区在线视频| 91久久精品国产一区二区三区| 午夜免费激情av| 99热精品在线国产| 99久久精品热视频| 亚洲在线观看片| 欧美性猛交╳xxx乱大交人| 免费av不卡在线播放| 国产麻豆成人av免费视频| 男人舔女人下体高潮全视频| av在线老鸭窝| 露出奶头的视频| 少妇熟女aⅴ在线视频|