
    Graph-Based Chinese Word Sense Disambiguation with Multi-Knowledge Integration

    Wenpeng Lu, Fanqing Meng, Shoujin Wang, Guoqiang Zhang, Xu Zhang, Antai Ouyang and Xiaodong Zhang
    Computers, Materials & Continua, 2019, Issue 10

    Abstract: Word sense disambiguation (WSD) is a fundamental and significant task in natural language processing, which directly affects the performance of upper applications. However, WSD is very challenging due to the problem of knowledge bottleneck, i.e., it is hard to acquire abundant disambiguation knowledge, especially in Chinese. To solve this problem, this paper proposes a graph-based Chinese WSD method with multi-knowledge integration. Particularly, a graph model combining various Chinese and English knowledge resources by word sense mapping is designed. Firstly, the content words in a Chinese ambiguous sentence are extracted and mapped to English words with BabelNet. Then, English word similarity is computed based on English word embeddings and a knowledge base. Chinese word similarity is evaluated with Chinese word embeddings and HowNet, respectively. The weights of the three kinds of word similarity are optimized with a simulated annealing algorithm so as to obtain their overall similarities, which are utilized to construct a disambiguation graph. The graph scoring algorithm evaluates the importance of each word sense node and judges the right senses of the ambiguous words. Extensive experimental results on the SemEval dataset show that our proposed WSD method significantly outperforms the baselines.

    Keywords: Word sense disambiguation, graph model, multi-knowledge integration, word similarity.

    1 Introduction

    Ambiguous words are ubiquitous in human languages, which leads to huge confusion for natural language processing (NLP). Word sense disambiguation (WSD) is to determine the meaning of a word according to its context; it is a fundamental task in NLP that directly affects upper applications, e.g., machine translation, information retrieval, text categorization and automatic summarization [Raganato, Camacho-Collados and Navigli (2017); Lu, Wu, Jian et al. (2018); Xiang, Li, Hao et al. (2018)].

    The existing WSD methods are divided into three categories: supervised, unsupervised and knowledge-based methods. The supervised method trains classifiers with machine learning on a sense-annotated corpus, which are utilized to judge the senses of new instances [Raganato, Bovi and Navigli (2017)]. Though the supervised method can achieve the best disambiguation performance, its effectiveness depends on the size and quality of the sense-annotated corpus. Due to the limitation of annotated corpora, the supervised method is hard to apply to large-scale WSD tasks. The unsupervised method distinguishes the categories of word senses according to their contexts with clustering technology; it can only differentiate sense categories instead of senses and cannot annotate each instance with its accurate sense [Panchenko, Ruppert, Faralli et al. (2017)]. The knowledge-based method judges the sense of each instance according to its context and various knowledge bases. Though the performance of the knowledge-based method is not better than that of the supervised one, it can utilize all kinds of existing knowledge bases and can achieve better coverage [Raganato, Camacho-Collados and Navigli (2017)]. The knowledge-based method is the only method that is available on large-scale WSD tasks, and it has achieved good performance in SemEval [Moro and Navigli (2015); Navigli and Ponzetto (2012); Raganato, Camacho-Collados and Navigli (2017); Chen, Liu and Sun (2014)]. The existing knowledge bases contain abundant semantic relationships, which can form a huge semantic graph and are beneficial to WSD. Graph-based WSD is a representative knowledge-based method, which is the most popular one and has attracted more and more attention in the NLP field [Dongsuk, Kwon, Kim et al. (2018); Duque, Stevenson, Martinez-Romo et al. (2018); Meng, Lu, Zhang et al. (2018)]. Graph-based WSD constructs the disambiguation graph according to semantic knowledge relationships, so its performance is greatly affected by the size and quality of knowledge resources. The knowledge acquisition bottleneck is the key factor that limits its development, which is more serious in Chinese due to the rareness of Chinese semantic knowledge resources [Lu (2018)].

    The traditional graph-based Chinese WSD method usually utilizes one kind of Chinese knowledge resource and is thus severely troubled by the problem of knowledge bottleneck [Lu, Huang and Wu (2013); Yang and Huang (2012)]. Compared with knowledge resources in Chinese, those in English are more mature and abundant. If we can integrate various Chinese and English knowledge resources together, so that they complement each other, we can fully exploit all kinds of disambiguation knowledge. This shows the potential to significantly improve the performance of Chinese WSD.

    Apparently, how to integrate the existing Chinese and English knowledge resources is highly challenging, as their senses are not mapped to each other. Besides, how to evaluate the overall similarities of sense pairs is difficult, as the relative importance of each knowledge resource is unknown to us. Inspired by the significant progress made on representation learning and optimization algorithms in various tasks such as sentence representation [Mikolov, Sutskever, Chen et al. (2013); Subramanian, Trischler, Bengio et al. (2018)] and simulated annealing optimization [Mafarja and Mirjalili (2017); Mamano and Hayes (2017)], this work integrates the existing English and Chinese knowledge resources and optimizes their weights to construct a knowledge graph so as to disambiguate the ambiguous words in Chinese. The main idea and contributions are as follows:

    ● We propose a novel knowledge integration method, which merges the English and Chinese knowledge resources together by sense definition alignment with the help of sentence representation. The method is flexible and can integrate various knowledge resources conveniently.

    ● We propose a simulated annealing algorithm to optimize the weights of various knowledge resources. With the optimized weights, the semantic relationships between senses are evaluated to construct an overall knowledge graph.

    ● To the best of our knowledge, this is the first work on graph-based Chinese WSD with multi-knowledge integration. This work maps and integrates a variety of English knowledge resources into Chinese, and optimizes their weights with a simulated annealing algorithm to compute the similarities of sense pairs. According to the senses and their similarities, an overall knowledge graph is constructed, where the graph scoring algorithm evaluates the importance of the sense nodes to judge the right sense.

    Extensive experiments on the SemEval WSD task are conducted to evaluate our proposed method. The results show that our method substantially outperforms the existing methods, with at least a 2.4% improvement.

    The rest of this paper is organized as follows: Section 2 discusses the related work and gives a brief summary of WSD. Section 3 details the proposed graph-based Chinese WSD with multi-knowledge integration, where each key module is described. Section 4 provides the empirical results by comparing our method with the baselines. Finally, we conclude this work and discuss future work in Section 5.

    2 Related work

    Graph-based WSD methods are inspired by the lexical chain, which refers to a sequence of semantically related words in a given text that are linked together by lexical semantic relations, e.g., eat → apple → fruit → banana. Graph-based WSD is the most popular method in knowledge-based WSD, which constructs a knowledge graph with senses as nodes and semantic relations as edges. Based on the structure of the knowledge graph, the right sense is selected [Dongsuk, Kwon, Kim et al. (2018)].

    Galley et al. [McKeown and Galley (2003)] have proposed a WSD method based on lexical chains, introduced as follows. Firstly, when constructing the disambiguation graph, all possible senses are added to the graph as nodes; then the words in the ambiguous sentence are processed one by one. If there exists a semantic relationship between the current word and the processed ones, this relationship is added to the graph as an edge, which is assigned a weight according to the type of relationship and the distance. After the graph is constructed, the weights of the sense nodes of ambiguous words are summed and the sense with the greatest weight is selected as the right sense. The method achieves 62.1% accuracy on the SemCor noun dataset.
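    The selection step described above can be sketched in a few lines; the sense labels, edges and weights below are invented purely for illustration:

```python
from collections import defaultdict

# Edges of a toy lexical-chain graph: (sense_node, sense_node, weight).
# Senses and weights are hypothetical, not from the original paper.
edges = [
    ("bank#money", "loan#money", 1.0),
    ("bank#money", "interest#money", 0.8),
    ("bank#river", "loan#money", 0.1),
]

def pick_sense(edges, candidates):
    # Galley-style selection: sum the weights of the edges incident to
    # each candidate sense node and pick the sense with the greatest total.
    total = defaultdict(float)
    for a, b, w in edges:
        total[a] += w
        total[b] += w
    return max(candidates, key=lambda s: total[s])

print(pick_sense(edges, ["bank#money", "bank#river"]))  # → bank#money
```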

    Mihalcea [Mihalcea (2004)] has proposed a WSD method based on the PageRank algorithm, which takes all the senses of the words as the nodes and the semantic relationships between the words as the edges to construct the disambiguation graph. The PageRank algorithm is applied on the graph to evaluate the importance of each sense node and judge the right sense. Agirre et al. propose personalized PageRank for WSD [Agirre and Soroa (2009)], which pays more attention to some words and improves the evaluation of sense importance.

    Navigli et al. [Navigli and Velardi (2005)] propose a structural semantic interconnections (SSI) algorithm for WSD, which creates structural specifications of the possible senses for each word and constructs grammar rules to describe the interconnection relations. The most suitable sense is selected according to the grammar. SSI achieves the best performance in Senseval-3 and SemEval-2007.

    Yang et al. [Yang and Huang (2012)] propose a graph-based WSD method based on word distance, which strengthens the influence of near words and weakens that of far words when evaluating the importance of sense nodes in the graph. Lu et al. [Lu, Huang and Wu (2014)] propose a graph-based WSD method based on domain knowledge, which integrates domain knowledge into the disambiguation framework and improves multiple graph scoring algorithms.

    Traditional graph-based methods try to construct the subgraph of all words in a sentence, which may introduce noisy information [Navigli and Lapata (2010)]. To avoid this problem, Dongsuk et al. [Dongsuk, Kwon, Kim et al. (2018)] propose a WSD method based on subgraph reconstruction, where the context words of an ambiguous word used for constructing the subgraph are selected with a word similarity threshold. The word similarity is computed based on an embedding generated by Doc2Vec [Le and Mikolov (2014)], which encodes information about the semantic relational paths of words in BabelNet [Navigli and Ponzetto (2012)].

    The above graph-based WSD methods construct the disambiguation graph according to some lexical knowledge resources, e.g., WordNet, BabelNet and HowNet [Miller (1995); Navigli and Ponzetto (2012); Zhendong and Qiang (2006)]. Most of them only utilize one kind of knowledge resource. Due to the limitation of the size and quality of the resource, the graph-based methods suffer from the knowledge bottleneck. Apparently, the knowledge resources are different and complementary. It is necessary to integrate as many existing resources as possible to strengthen the ability of WSD systems. Compared with English, the available Chinese semantic resources are rarer, which makes the problem more critical. How to integrate the various existing semantic resources to improve the performance of Chinese WSD is an important issue that is waiting to be solved.

    3 The proposed WSD method

    In this section, we describe the framework of the graph-based WSD method and its key modules in detail. Within the framework, for sense pairs of Chinese words, the English knowledge resources are utilized to compute their similarities together with the Chinese resources. The weights of the similarities are optimized with a simulated annealing algorithm. The disambiguation graph is constructed with senses as nodes, semantic relations as edges and similarities as their weights, where a graph algorithm is utilized to score each sense node and select the right sense. The framework and its key modules are introduced as follows.

    Figure 1:Model architecture of sentence matching

    3.1 Framework of the WSD method

    The framework of our proposed graph-based WSD with multi-knowledge integration is shown in Fig. 1. The content words in a Chinese sentence are extracted and mapped into English words with BabelNet. By this mapping, the resources in English become available for Chinese words. Then, according to English and Chinese knowledge resources, three kinds of word similarity are computed, whose weights are optimized with a simulated annealing algorithm so as to obtain overall similarities to construct the disambiguation graph. The importance score of each sense node in the graph is evaluated to select the right senses of ambiguous words. The detailed framework is described as follows:

    (1) Extract the content words after preprocessing the Chinese ambiguous sentence.

    (2) Map Chinese word senses into English ones [Meng, Lu and Xue (2017)].

    (3) Compute word similarity based on English word embeddings and knowledge bases, e.g., Wikipedia, BabelNet, Gigaword [Parker, Graff, Kong et al. (2011)].

    (4) Compute word similarity based on Chinese word embeddings trained on the Sogou corpus.

    (5) Compute word similarity based on HowNet [Zhendong and Qiang (2006)].

    (6) Optimize the relative weights of the above three kinds of word similarity with the simulated annealing algorithm so as to obtain the overall similarities.

    (7) Take word senses as nodes, semantic relations as edges and overall similarities as weights of edges to construct the disambiguation knowledge graph.

    (8) Evaluate the importance of each sense node in the graph with the graph scoring algorithm to select the right sense.

    As shown in Fig. 1, the sense mapping module, the three word similarity modules, the weight optimization module, and the graph construction and scoring module are the key components of our proposed method, which are explained in the following subsections.

    3.2 Word sense mapping

    Due to the rareness of Chinese semantic knowledge resources, mapping Chinese word senses to English ones and utilizing English resources to compensate for the deficiency of Chinese resources is a practicable solution. In order to map the senses in Chinese and English semantic resources, we have proposed a method to map the senses between Chinese and English with BabelNet and an English-Chinese dictionary [Meng, Lu and Xue (2017); Navigli and Ponzetto (2012); Ke (2011)].

    For each English sense, BabelNet provides a detailed definition with several short examples. Besides, the English-Chinese dictionary, i.e., Collins COBUILD Advanced Learner's English-Chinese Dictionary, provides detailed bilingual definitions with bilingual examples. That is, both BabelNet and the Collins dictionary provide an English description for each sense, and the latter also provides the corresponding Chinese sense annotation. If an English sense corresponds to a Chinese sense, the meanings of their English definitions or examples should be similar. This is a key clue to find and verify the mapping relations between Chinese and English senses.

    With this in mind, we generate embedding representations for the English definitions and examples. According to their cosine similarities, we find the corresponding relationships between BabelNet and Collins definitions in English. Then, as the Collins English-Chinese dictionary provides English and Chinese definitions simultaneously, we can further obtain the mapping relations between English and Chinese. The detailed implementation is introduced as follows.

    Firstly, for each Chinese sense, its possible candidate English senses are prepared according to HowNet or a Chinese-English dictionary. Secondly, for the candidate English senses, we get their definitions and examples according to BabelNet, and collect the bilingual definitions and examples according to an English-Chinese bilingual dictionary. Thirdly, inspired by related work with Word2Vec [Mikolov, Sutskever, Chen et al. (2013); Le and Mikolov (2014)], we generate an embedding representation for each sentence in the definitions and examples. Finally, the cosine similarities among the embedding representations are computed to find the corresponding English sense for each Chinese sense. Once the senses are mapped between Chinese and English knowledge resources, we can utilize English semantic resources to assist Chinese WSD tasks, which provides great convenience for Chinese WSD. Our other paper has described the above procedures carefully; its F1-measure reaches 75.75% [Meng, Lu and Xue (2017)].
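    The mapping procedure can be sketched as follows; a toy bag-of-words vector stands in for the learned sentence embeddings, and the candidate sense IDs and definitions are invented for illustration:

```python
import math
from collections import Counter

def embed(sentence):
    # Toy bag-of-words "embedding"; the paper uses learned sentence
    # representations (Word2Vec/Doc2Vec-style), which this stands in for.
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_sense(collins_def, babelnet_defs):
    # Pick the candidate BabelNet sense whose English definition is most
    # similar to the Collins English definition of the Chinese sense.
    target = embed(collins_def)
    scored = [(cosine(target, embed(d)), sid) for sid, d in babelnet_defs.items()]
    return max(scored)[1]

candidates = {
    "bn:001": "a person skilled in traditional Chinese medicine",
    "bn:002": "the body of knowledge of traditional Chinese healing",
}
print(map_sense("a practitioner of traditional Chinese medicine", candidates))
```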

    3.3 Word similarity based on English word embeddings and knowledge base

    After the mapping processing in the last subsection, the Chinese and English senses are mapped to each other. Then, the English knowledge resources can be utilized to disambiguate the words in Chinese. When the disambiguation graph is constructed, each semantic relation between sense nodes needs to be assigned a reasonable weight, which should consider as much information as possible. In this subsection, the information from English knowledge resources is considered.

    We have realized a method for word similarity computation based on English word embeddings and a knowledge base, as described in Meng et al. [Meng, Lu, Zhang et al. (2017)]. The method participated in SemEval-2017 Task 2 (http://alt.qcri.org/semeval2017/task2/), i.e., multilingual and cross-lingual semantic word similarity [Camacho-Collados, Pilehvar, Collier et al. (2017)]. In the competition, our method reached 0.778 on the official evaluation measure, which won second place on the English monolingual word similarity subtask [Meng, Lu, Zhang et al. (2017)]. Since the competition system achieved such excellent performance, we integrate it into our proposed WSD framework and utilize it to compute the word similarity based on English knowledge resources, which is introduced as follows.

    The method is a combination method consisting of two basic modules: the similarity based on word embeddings and the similarity based on a knowledge base, i.e., BabelNet. For the former, the Word2Vec toolkit (https://code.google.com/p/word2vec/) is used to train word embeddings on the English Wikipedia corpus [Mikolov, Sutskever, Chen et al. (2013)]. With the embeddings of each word pair, their cosine similarity is computed. For the latter, BabelNet (https://babelnet.org/) contains a large number of concepts and semantic relations, such as synonymy, hypernymy and meronymy. With the BabelNet API, we can obtain all of the semantic relations between two words. According to the shortest path, the similarity of the word pair is computed. The similarity based on word embeddings and the similarity based on the knowledge base are linearly weighted and accumulated as the overall similarity based on English knowledge resources. Our other paper has introduced the implementation in detail [Meng, Lu, Zhang et al. (2017)]. The method is flexible and can combine more knowledge resources.
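    A minimal sketch of this two-part combination, assuming a hand-made relation graph in place of BabelNet, a precomputed embedding cosine value, and an illustrative `1/(1 + path length)` form for the shortest-path similarity:

```python
from collections import deque

def path_similarity(graph, a, b):
    # Breadth-first search for the shortest path over a small hand-made
    # relation graph (a stand-in for BabelNet's synonymy/hypernymy edges);
    # similarity = 1 / (1 + shortest path length), an assumed form.
    dist = {a: 0}
    q = deque([a])
    while q:
        u = q.popleft()
        if u == b:
            return 1.0 / (1 + dist[u])
        for v in graph.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return 0.0  # no path: unrelated words

def combined_similarity(emb_sim, kb_sim, alpha=0.5):
    # Linear weighting of the two component similarities.
    return alpha * emb_sim + (1 - alpha) * kb_sim

graph = {"car": ["vehicle"], "vehicle": ["car", "truck"], "truck": ["vehicle"]}
kb = path_similarity(graph, "car", "truck")   # path car-vehicle-truck → 1/3
print(round(combined_similarity(0.8, kb), 3))
```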

    3.4 Word similarity based on Chinese word embeddings

    In the last subsection, with the support of word sense mapping, word similarity based on English word embeddings was integrated into our WSD framework. Since we aim at the disambiguation problem in Chinese, word similarity based on Chinese word embeddings is crucial and necessary.

    As Word2Vec has demonstrated a powerful ability in various tasks [Mikolov, Sutskever, Chen et al. (2013)], we continue to utilize it to generate Chinese word embeddings, which are trained on the Sogou news corpus (http://www.sogou.com/labs/resource/ca.php). With the Chinese word embeddings, we compute their cosine similarity as the word similarity based on Chinese word embeddings.
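    The cosine computation can be illustrated with toy three-dimensional vectors; real embeddings would come from a Word2Vec model trained on the corpus, and the words and values below are invented:

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy 3-dimensional vectors standing in for Word2Vec embeddings
# trained on the Sogou news corpus.
emb = {
    "医生": [0.9, 0.1, 0.0],
    "护士": [0.8, 0.2, 0.1],
    "苹果": [0.0, 0.1, 0.9],
}
print(round(cosine(emb["医生"], emb["护士"]), 3))  # near 1: related words
print(round(cosine(emb["医生"], emb["苹果"]), 3))  # near 0: unrelated words
```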

    3.5 Word similarity based on HowNet

    For the word similarity of Chinese words, besides the word embedding method in the last subsection, HowNet (http://www.keenage.com/) also provides an API to compute word similarity [Zhendong and Qiang (2006)].

    HowNet is a common semantic knowledge base, which describes concepts in Chinese and English and the relationships among concepts and their attributes. There are about 800 primitive lexemes in HowNet, which are the basic and smallest units of meaning that cannot be divided further. All concepts in HowNet are described with these basic lexemes.

    HowNet is widely applied in the Chinese NLP field and provides a convenient API, i.e., Hownet_GET_Concept_Similarity, to compute the semantic similarity between two concepts [Yang and Huang (2012)]. The similarity considers multiple relationships from HowNet, including four kinds of primitive lexeme similarities [Qun and Sujian (2002)], which are computed according to their path distance in the HowNet hierarchical structure.

    3.6 Weight optimization with simulated annealing algorithm

    The above three similarity methods compute word similarities with different semantic knowledge resources, which are complementary to each other. In order to fully utilize their respective advantages, we propose a weight optimization algorithm based on simulated annealing to automatically decide the weight parameters of the three similarities, which are used to linearly combine them so as to obtain a more reasonable overall similarity. The procedure to optimize the weight parameters is shown in Algorithm 1. The core of the simulated annealing algorithm for weight optimization is described as:

$$P(x_{old} \to x_{new}) = \begin{cases} 1, & \text{if } result(x_{new}) > result(x_{old}) \\ \exp\left(\dfrac{result(x_{new}) - result(x_{old})}{t}\right), & \text{otherwise} \end{cases} \tag{1}$$

    where result(x) is the target function, i.e., the disambiguation accuracy, δ is the cooling rate and t is the temperature. If the result of the new parameters x_new is better than that of x_old, the new parameters are selected with a probability of 1. Otherwise, the new parameters are selected with a probability of exp((result(x_new) - result(x_old))/t).

    In Algorithm 1, the parameters x, y, z are the weights of the three kinds of word similarity, which need to be optimized. Line 1 is the initialization operation, which sets the initial temperature t as 100, the minimal cooling temperature t_min as 0.001, the cooling rate δ as 0.98 and the maximum iterations k as 100 in the experiments. Lines 4-5 select a random double value for x, which affects the value of z. In Line 6, getEvalResult is the target function, which returns the disambiguation accuracy given the parameters x, y, z. Line 7 generates an updated value x_new from the neighbourhood of x. Lines 8-18 decide whether the parameter x is updated with the new x_new, as described in Eq. (1). Line 19 changes the value of t with the cooling rate δ. We obtain the three optimized weight parameters by running Algorithm 1 twice. In the first run, we set the value of y as 1/3, and get the optimized weights x, z. Then, we keep the smaller of them as a final weight parameter, and run the algorithm again to get the other two weights. The parameters satisfy x + y + z = 1, x ≥ 0, y ≥ 0, z ≥ 0.
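    The first run of Algorithm 1 can be sketched as below; the objective is a hypothetical smooth function standing in for getEvalResult, which in the real system evaluates disambiguation accuracy, and the perturbation step size is an assumption:

```python
import math
import random

def anneal(objective, y=1/3, t=100.0, t_min=0.001, delta=0.98, k=100):
    # Simulated-annealing sketch of Algorithm 1 (first run): y is held
    # fixed, x is perturbed, and z = 1 - x - y is implied. `objective`
    # stands in for getEvalResult, the disambiguation accuracy.
    random.seed(0)                      # reproducible for illustration
    x = random.uniform(0.0, 1.0 - y)    # random initial weight
    cur = objective(x, y, 1.0 - x - y)
    best_x, best = x, cur
    while t > t_min:
        for _ in range(k):
            # propose x_new in the neighbourhood of x, kept feasible
            x_new = min(max(x + random.uniform(-0.05, 0.05), 0.0), 1.0 - y)
            cand = objective(x_new, y, 1.0 - x_new - y)
            # Eq. (1): always accept an improvement; accept a worse
            # candidate with probability exp((cand - cur) / t)
            if cand > cur or random.random() < math.exp((cand - cur) / t):
                x, cur = x_new, cand
                if cur > best:
                    best_x, best = x, cur
        t *= delta                      # cooling step
    return best_x, y, 1.0 - best_x - y, best

# Hypothetical smooth objective peaking at x = 0.4 (not the real accuracy).
x, y, z, acc = anneal(lambda x, y, z: 1.0 - (x - 0.4) ** 2)
print(round(x, 2), round(acc, 3))
```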

    After the weight parameters are optimized, the final overall word similarity is decided as:

$$sim(ws, ws') = x \cdot sim_{en}(ws, ws') + y \cdot sim_{vec}(ws, ws') + z \cdot sim_{how}(ws, ws') \tag{2}$$

    where ws and ws' are two senses, sim_en is the word similarity based on English word embeddings and knowledge base, sim_vec is the word similarity based on Chinese word embeddings, sim_how is the word similarity based on HowNet, and their optimized weight parameters are x, y, z, respectively.
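    Eq. (2) amounts to a one-line linear combination; the sketch below plugs in the optimized weights reported in Section 4.2, with illustrative component similarities:

```python
def overall_similarity(sim_en, sim_vec, sim_how, x=0.336, y=0.636, z=0.028):
    # Eq. (2): linear combination of the three component similarities,
    # using the optimized weights reported in the experiments
    # (English resources 0.336, Chinese embeddings 0.636, HowNet 0.028).
    return x * sim_en + y * sim_vec + z * sim_how

# Illustrative component similarities for one sense pair.
print(round(overall_similarity(0.7, 0.8, 0.5), 3))
```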

    3.7 Disambiguation graph construction

    In order to construct the disambiguation graph, we take word senses as nodes, semantic relationships as edges and the overall word similarities as the weights of edges.

    As we utilize Chinese and English knowledge resources, the senses are represented with a triple, i.e., Word(ID, Sword, Enword). ID is the ID of a sense or concept. Sword is the first primitive lexeme of the concept definition in HowNet. Enword is its corresponding description in English, i.e., the mapping from Chinese to English. With the representation form of triples, we can easily integrate the three kinds of word similarity. For example, "中医" has two senses, which can be represented as "中医(157329, 人, practitioner of Chinese medicine)" and "中医(157332, 知识, traditional Chinese science)", whose word similarities with other senses can be computed with Eq. (2).
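    Graph construction with such triples might look like the following sketch; the second word, its sense triple and the similarity function are invented for illustration, with a toy stand-in for the overall similarity of Eq. (2):

```python
from itertools import combinations

# Senses as (id, sword, enword) triples, following the paper's
# Word(ID, Sword, Enword) representation; "病人" and its triple are
# hypothetical additions for the example.
senses = {
    "中医": [(157329, "人", "practitioner of Chinese medicine"),
             (157332, "知识", "traditional Chinese science")],
    "病人": [(201001, "人", "patient")],
}

def toy_similarity(a, b):
    # Stand-in for the overall similarity of Eq. (2); here we simply
    # reward senses sharing the same HowNet first primitive lexeme.
    return 0.9 if a[1] == b[1] else 0.1

def build_graph(senses):
    # Edges connect senses of distinct words; weights are similarities.
    graph = {}  # (sense_id, sense_id) -> weight
    for (w1, s1), (w2, s2) in combinations(senses.items(), 2):
        for a in s1:
            for b in s2:
                graph[(a[0], b[0])] = toy_similarity(a, b)
    return graph

g = build_graph(senses)
print(g[(157329, 201001)])  # both "人" senses: high weight
```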

    3.8 Graph scoring algorithm

    The PageRank algorithm is selected to evaluate and score the importance of each sense node in the disambiguation graph. If a sense node connects with more nodes of higher importance, its own importance is higher, which means that the sense is more related to the context words. As the algorithm iterates, the importance of each node is gradually differentiated. The sense node with the maximum importance is selected as the right sense of the ambiguous word. The importance of each node is updated with the following equation:

$$score(v) = \frac{1 - \alpha}{N} + \alpha \sum_{u \in in(v)} \frac{score(u)}{|out(u)|}$$

    where v refers to a sense node, α indicates the probability to continue the current Markov chain, 1-α indicates the probability of randomly selecting another node instead of continuing the current Markov chain, N is the total number of sense nodes, |out(u)| refers to the out-degree of node u, and in(v) indicates the set of all nodes that link to node v.
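    The update can be sketched as a plain power-iteration loop; a toy unweighted graph replaces the real disambiguation graph, and the node names are invented:

```python
def pagerank(edges, nodes, alpha=0.85, iters=50):
    # Unweighted PageRank sketch of the update:
    # score(v) = (1 - alpha)/N + alpha * sum_{u in in(v)} score(u)/|out(u)|
    n = len(nodes)
    score = {v: 1.0 / n for v in nodes}
    out = {v: sum(1 for (u, w) in edges if u == v) for v in nodes}
    for _ in range(iters):
        new = {}
        for v in nodes:
            rank = sum(score[u] / out[u] for (u, w) in edges if w == v and out[u])
            new[v] = (1 - alpha) / n + alpha * rank
        score = new
    return score

# Tiny undirected toy graph, given as directed edge pairs.
edges = [("s1", "c"), ("c", "s1"), ("s2", "c"), ("c", "s2"), ("c", "d"), ("d", "c")]
nodes = ["s1", "s2", "c", "d"]
s = pagerank(edges, nodes)
print(max(s, key=s.get))  # the hub node "c" scores highest
```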

    4 Experiments

    4.1 Data sets and evaluation measure

    The benchmark dataset is SemEval-2007 task #5, i.e., the multilingual Chinese-English lexical sample task [Jin, Wu and Yu (2007)], which consists of 19 nouns and 21 verbs. Both training and test corpora are provided. The detailed information of this dataset is shown in Tab. 1. Our proposed WSD method is unsupervised, which only utilizes test instances instead of training ones.

    Table 1:Summary of SemEval dataset

    The macro-average p_mar is selected to evaluate the performance of WSD methods, which is defined as:

$$p_{mar} = \frac{1}{N} \sum_{i=1}^{N} \frac{m_i}{n_i}$$

    where N is the number of all word-types, m_i is the number of correctly disambiguated instances of the i-th word-type, and n_i is the number of all test instances of this word-type.
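    The measure is straightforward to compute from per-word-type counts; the counts below are hypothetical:

```python
def macro_average(per_word):
    # p_mar = (1/N) * sum_i (m_i / n_i): average the per-word-type
    # accuracies, so frequent word-types do not dominate the score.
    return sum(m / n for m, n in per_word) / len(per_word)

# Hypothetical (correct, total) instance counts for three word-types.
print(round(macro_average([(8, 10), (3, 5), (9, 10)]), 3))
```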

    4.2 Baselines

    Our proposed graph-based WSD with multi-knowledge integration is compared with four different types of representative unsupervised WSD methods, i.e., TorMd, HowGraph, EnGraph and SogouGraph [Jin, Wu and Yu (2007); Meng (2018)].

    · TorMd:An unsupervised WSD method proposed by the University of Toronto,which wins the first place in this SemEval competition.

    · HowGraph: This is an abridged version of our proposed method, which only utilizes the word similarity based on HowNet.

    · EnGraph: This is also an abridged version of our proposed method, which only utilizes the word similarity based on English word embeddings and knowledge base.

    · SogouGraph: This is another abridged version of our proposed method, which only utilizes the word similarity based on Chinese word embeddings trained on the Sogou corpus.

    · MultiGraph: This is the full version of our proposed method, which integrates the three kinds of word similarity. In the experiments, the optimized weight parameters for the word similarities based on HowNet, English word embeddings and knowledge base, and Chinese word embeddings are 0.028, 0.336 and 0.636, respectively.

    4.3 Results and analysis

    4.3.1 Comparison of overall performances

    The performance comparison of all methods is shown in Tab. 2. All the graph-based methods, including our proposed method and its three abridged versions, outperform the TorMd method, which demonstrates that graph-based methods have the potential to achieve a significant improvement.

    MultiGraph shows 6.1%, 2.4%, 4.1% and 3.6% improvements over TorMd, HowGraph, EnGraph and SogouGraph, respectively. The results demonstrate that utilizing a single knowledge resource hurts the performance, and show that our proposed WSD framework integrating multiple resources is powerful. In order to achieve satisfactory performance, it is necessary to integrate as many knowledge resources as possible in graph-based WSD.

    Table 2:Comparison of overall performances.Imp refers to the improvement made by MultiGraph over the corresponding method

    4.3.2 Comparison of noun performances

    The performance comparison on nouns is shown in Tab. 3. MultiGraph achieves the best performance, which demonstrates 4.5%, 4.4%, 3.5% and 4.3% improvements over TorMd, HowGraph, EnGraph and SogouGraph, respectively. As shown in Tab. 3, HowGraph, EnGraph and SogouGraph have different advantages on different words, while MultiGraph integrates their advantages so as to improve the noun performance greatly.

    Table 3:Comparison of noun performances.For each word,its best performance is boldfaced

    4.3.3 Comparison of verb performances

    The performance comparison on verbs is shown in Tab. 4. MultiGraph still achieves the best performance, which demonstrates 9.4%, 0.5%, 4.6% and 3.0% improvements over TorMd, HowGraph, EnGraph and SogouGraph, respectively. The significant improvements demonstrate the effectiveness of our proposed graph-based WSD method with multi-knowledge integration, i.e., MultiGraph.

    Table 4:Comparison of verb performances

    5 Conclusion

    This work proposes a novel graph-based Chinese WSD method with multi-knowledge integration. Different from the existing knowledge-based methods, our method utilizes Chinese and English semantic knowledge simultaneously to disambiguate words. Three different kinds of word similarity from various knowledge resources are weighted with a simulated annealing algorithm and integrated to compute an overall similarity. With senses as nodes, semantic relations as edges and the overall similarities as the weights of edges, the disambiguation graph is constructed, which is evaluated with the graph scoring algorithm to select the right senses. Extensive experiments on the SemEval dataset show that the proposed method significantly outperforms four baselines. In this work, we only use three kinds of knowledge resources, which is merely a first trial at integrating multilingual knowledge resources. Our future work is to find more semantic resources and design more sophisticated integration methods for graph-based WSD.

    Acknowledgement: The research work is supported by the National Key R&D Program of China under Grant No. 2018YFC0831704, the National Natural Science Foundation of China under Grant No. 61502259, the Natural Science Foundation of Shandong Province under Grant No. ZR2017MF056, and the Taishan Scholar Program of Shandong Province in China (Directed by Prof. Yinglong Wang).

亚洲七黄色美女视频| 欧美日韩亚洲综合一区二区三区_| 亚洲av电影在线进入| videosex国产| 国产午夜精品久久久久久| 黄色视频不卡| 欧美激情极品国产一区二区三区| 日韩欧美三级三区| 久久中文看片网| 99精品欧美一区二区三区四区| 日本精品一区二区三区蜜桃| 久久中文看片网| 精品少妇黑人巨大在线播放| 久久中文看片网| 老汉色av国产亚洲站长工具| 国产午夜精品久久久久久| 性高湖久久久久久久久免费观看| 性高湖久久久久久久久免费观看| 日韩熟女老妇一区二区性免费视频| 国产精品.久久久| 91成年电影在线观看| 香蕉丝袜av| 亚洲黑人精品在线| 久久免费观看电影| 在线观看免费日韩欧美大片| 两性夫妻黄色片| 女警被强在线播放| 操美女的视频在线观看| 可以免费在线观看a视频的电影网站| 欧美一级毛片孕妇| 国产精品欧美亚洲77777| 欧美大码av| 久久99热这里只频精品6学生| 国产av国产精品国产| 亚洲欧美一区二区三区久久| 一本综合久久免费| 亚洲一区中文字幕在线| 亚洲五月色婷婷综合| 婷婷成人精品国产| 日韩欧美国产一区二区入口| tube8黄色片| 精品一品国产午夜福利视频| 国产精品国产av在线观看| 十八禁网站免费在线| 露出奶头的视频| 亚洲国产精品一区二区三区在线| 麻豆av在线久日| 99精品久久久久人妻精品| 极品人妻少妇av视频| 狂野欧美激情性xxxx| av一本久久久久| 真人做人爱边吃奶动态| av不卡在线播放| 亚洲成人免费电影在线观看| 久久久久久久国产电影| 国产精品久久久av美女十八| 午夜福利乱码中文字幕| 欧美日本中文国产一区发布| 露出奶头的视频| 一级片免费观看大全| 高清黄色对白视频在线免费看| 高清视频免费观看一区二区| 他把我摸到了高潮在线观看 | 日本av免费视频播放| 波多野结衣一区麻豆| 在线av久久热| 制服人妻中文乱码| 午夜福利视频精品| 美女国产高潮福利片在线看| 一区二区三区激情视频| 黑人巨大精品欧美一区二区蜜桃| 国产在线观看jvid| 欧美激情 高清一区二区三区| 精品久久久久久久毛片微露脸| 99国产精品一区二区蜜桃av | 日韩视频一区二区在线观看| 欧美精品人与动牲交sv欧美| 99精品久久久久人妻精品| 精品午夜福利视频在线观看一区 | 无限看片的www在线观看| 亚洲三区欧美一区| videos熟女内射| 亚洲第一青青草原| 国产有黄有色有爽视频| 欧美亚洲 丝袜 人妻 在线| 久久这里只有精品19| 黄色丝袜av网址大全| 亚洲欧美一区二区三区久久| 国产亚洲av高清不卡| 国产男靠女视频免费网站| 宅男免费午夜| 18禁美女被吸乳视频| 久久99热这里只频精品6学生| 精品国产一区二区三区四区第35| 啪啪无遮挡十八禁网站| 最近最新中文字幕大全电影3 | 黄片小视频在线播放| 91成年电影在线观看| 久久精品亚洲av国产电影网| 自拍欧美九色日韩亚洲蝌蚪91| 精品少妇久久久久久888优播| 美女国产高潮福利片在线看| 亚洲av成人不卡在线观看播放网| 动漫黄色视频在线观看| 搡老岳熟女国产| 欧美日韩中文字幕国产精品一区二区三区 | 一区二区av电影网| 最近最新中文字幕大全电影3 | 91精品三级在线观看| 人人妻人人澡人人看| 老司机深夜福利视频在线观看| 亚洲国产看品久久| 欧美亚洲 丝袜 人妻 在线| 窝窝影院91人妻| 亚洲欧美色中文字幕在线| 久久香蕉激情| 亚洲色图av天堂| 亚洲欧美色中文字幕在线| 亚洲人成77777在线视频| 大香蕉久久成人网| 欧美中文综合在线视频| 一本大道久久a久久精品| 嫩草影视91久久| 久久久水蜜桃国产精品网| 欧美日韩成人在线一区二区| 欧美另类亚洲清纯唯美| 女人爽到高潮嗷嗷叫在线视频| 国产淫语在线视频| 欧美黑人欧美精品刺激| 亚洲黑人精品在线| 狂野欧美激情性xxxx| 精品国产国语对白av| 美女视频免费永久观看网站| av视频免费观看在线观看| 男女免费视频国产| 成人国语在线视频| 亚洲欧美一区二区三区久久| 999久久久国产精品视频| 一二三四社区在线视频社区8| 午夜福利在线观看吧| 亚洲精品国产一区二区精华液| 久久精品亚洲精品国产色婷小说| av线在线观看网站| 国产亚洲欧美在线一区二区| 日韩大码丰满熟妇| 久久久精品94久久精品| 欧美激情久久久久久爽电影 | 成人永久免费在线观看视频 | 一级,二级,三级黄色视频| av不卡在线播放| 91av网站免费观看| 久久精品亚洲精品国产色婷小说| 免费高清在线观看日韩| 丝袜美足系列| 老熟女久久久| 亚洲av电影在线进入| 久久久精品国产亚洲av高清涩受| 国精品久久久久久国模美| 
欧美 日韩 精品 国产| 久久久久精品人妻al黑| 国产欧美日韩一区二区精品| 丰满迷人的少妇在线观看| 日韩一区二区三区影片| 国产一卡二卡三卡精品| a在线观看视频网站| 丰满迷人的少妇在线观看| 精品国产超薄肉色丝袜足j| 深夜精品福利| 99香蕉大伊视频| 成人精品一区二区免费| 亚洲专区中文字幕在线| 岛国毛片在线播放| 黑丝袜美女国产一区| 成人国语在线视频| 欧美激情久久久久久爽电影 | 久久精品亚洲熟妇少妇任你| 国产xxxxx性猛交| 日韩人妻精品一区2区三区| 777米奇影视久久| 精品福利观看| 久久av网站| 久久午夜亚洲精品久久| 啦啦啦在线免费观看视频4| 99国产极品粉嫩在线观看| 少妇裸体淫交视频免费看高清 | 午夜精品国产一区二区电影| 亚洲精品国产精品久久久不卡| 亚洲国产av影院在线观看| av电影中文网址| 啦啦啦免费观看视频1| 亚洲第一欧美日韩一区二区三区 | 久久久久久免费高清国产稀缺| 久久久水蜜桃国产精品网| 一区二区av电影网| 无限看片的www在线观看| h视频一区二区三区| 国产精品偷伦视频观看了| 脱女人内裤的视频| 99国产极品粉嫩在线观看| 亚洲七黄色美女视频| 久久午夜综合久久蜜桃| 亚洲国产欧美日韩在线播放| 午夜福利免费观看在线| 777米奇影视久久| 国产亚洲av高清不卡| 成年人免费黄色播放视频| 夜夜夜夜夜久久久久| 9热在线视频观看99| 欧美黑人欧美精品刺激| 精品一区二区三区四区五区乱码| 夜夜骑夜夜射夜夜干| 国产不卡av网站在线观看| 69精品国产乱码久久久| 一本久久精品| kizo精华| 无人区码免费观看不卡 | 一二三四社区在线视频社区8| aaaaa片日本免费| 777久久人妻少妇嫩草av网站| 久久久久精品人妻al黑| 久久久久久亚洲精品国产蜜桃av| 在线观看66精品国产| 啦啦啦 在线观看视频| 久久久久视频综合| 丰满饥渴人妻一区二区三| 亚洲av电影在线进入| 法律面前人人平等表现在哪些方面| 精品国产超薄肉色丝袜足j| 日本av免费视频播放| 美女视频免费永久观看网站| 久久这里只有精品19| 日韩制服丝袜自拍偷拍| 男人操女人黄网站| 久久久精品区二区三区| 久久天躁狠狠躁夜夜2o2o| 超碰成人久久| 精品国产一区二区三区久久久樱花| 国产1区2区3区精品| 99精品欧美一区二区三区四区| 久久99热这里只频精品6学生| 欧美在线黄色| 视频区图区小说| 麻豆成人av在线观看| 黄色视频,在线免费观看| 91老司机精品| 国产精品香港三级国产av潘金莲| 麻豆国产av国片精品| 一边摸一边抽搐一进一出视频| 国产午夜精品久久久久久| 男女高潮啪啪啪动态图| av天堂在线播放| 男人操女人黄网站| 老司机在亚洲福利影院| 国产精品自产拍在线观看55亚洲 | 女警被强在线播放| 亚洲五月色婷婷综合| 少妇裸体淫交视频免费看高清 | 青草久久国产| 午夜两性在线视频| 欧美日韩成人在线一区二区| 麻豆成人av在线观看| 亚洲一区二区三区欧美精品| 女性被躁到高潮视频| 亚洲av日韩在线播放| 超碰成人久久| 亚洲国产精品一区二区三区在线| 亚洲自偷自拍图片 自拍| a级毛片在线看网站| 美国免费a级毛片| 黄色视频不卡| 50天的宝宝边吃奶边哭怎么回事| 国产主播在线观看一区二区| aaaaa片日本免费| 午夜精品国产一区二区电影| a在线观看视频网站| 手机成人av网站| 天堂中文最新版在线下载| 免费一级毛片在线播放高清视频 | 精品一区二区三区视频在线观看免费 | 亚洲美女黄片视频| 99香蕉大伊视频| 激情视频va一区二区三区| 人成视频在线观看免费观看| 搡老熟女国产l中国老女人| 欧美日本中文国产一区发布| 搡老乐熟女国产| 国产高清激情床上av| 性少妇av在线| 老熟妇仑乱视频hdxx| 亚洲色图 男人天堂 中文字幕| 在线观看66精品国产| 欧美精品人与动牲交sv欧美| 夜夜骑夜夜射夜夜干| 黑丝袜美女国产一区| 久久国产亚洲av麻豆专区| 精品人妻熟女毛片av久久网站| 欧美大码av|