
    Automatic Persian Text Summarization Using Linguistic Features from Text Structure Analysis

    2021-12-15 | Ebrahim Heidary, Hamid Parvin, Samad Nejatian, Karamollah Bagherifard and Vahideh Rezaie
    Computers, Materials & Continua, December 2021

    Ebrahim Heidary, Hamid Parvin, Samad Nejatian, Karamollah Bagherifard,6 and Vahideh Rezaie

    1Department of Computer Engineering,Yasooj Branch,Islamic Azad University,Yasooj,Iran

    2Institute of Research and Development,Duy Tan University,Da Nang,550000,Vietnam

    3Faculty of Information Technology,Duy Tan University,Da Nang,550000,Vietnam

    4Department of Computer Science,Nourabad Mamasani Branch,Islamic Azad University,Mamasani,Iran

    5Department of Electrical Engineering,Yasooj Branch,Islamic Azad University,Yasooj,Iran

    6Young Researchers and Elite Club,Yasooj Branch,Islamic Azad University,Yasooj,Iran

    7Department of Mathematics,Yasooj Branch,Islamic Azad University,Yasooj,Iran

    Abstract: With the remarkable growth of textual data sources in recent years, easy, fast, and accurate text processing has become a challenge with significant payoffs. Automatic text summarization is the process of compressing text documents into shorter summaries for easier review of their core content, which must be done without losing important features and information. This paper introduces a new hybrid method for extractive text summarization with feature selection based on text structure. The major advantage of the proposed summarization method over previous systems is its modeling of the text structure and of the relationships between entities in the input text, which improves the sentence feature selection process and leads to the generation of unambiguous, concise, consistent, and coherent summaries. The paper also presents an evaluation of the proposed method based on precision and recall criteria, showing that it produces summaries consisting of chains of sentences with the aforementioned characteristics drawn from the original text.

    Keywords: Natural language processing; extractive summarization; linguistic feature; text structure analysis

    1 Introduction

    With the massive volume of digital text data generated every day, fast and accurate retrieval of valuable information from text has become an increasingly worthwhile challenge. One approach to tackling this challenge is automatic text summarization.

    Text summarization is the process of revising a text to make it shorter than the original document while retaining important words and sentences and without losing its main content and information [1]. Automatic text summarization means using machine or computer-based tools to produce a useful summary. Although the first automatic text summarization solutions were introduced in the 1950s [2,3], summarization has long been and still is one of the main challenges of natural language processing. Computer-generated summaries are often different from those produced by humans, because it is very difficult for machines to gain a deep understanding of the content of a text based on its syntactic and semantic structure as humans do [4].

    Summarization systems can be classified based on the type of input, output, purpose, language, and method of summarization (see Fig. 1). In terms of the type of input document, summarization systems are divided into two groups: single-document and multi-document. Depending on whether the input is simple text, a news article, a science article, etc., the purpose of a summarization system could be to produce up-to-date information, run queries, or inform users about a particular subject. The purpose of summarization is often an important determinant of the method of summarization [5]. Summarization methods can be classified into two categories: extractive and abstractive.

    Figure 1: Classification of summarizers [6]

    Extractive summarization involves selecting a set of sentences or phrases from the text based on the scores they earn according to a given criterion and copying them into the summary without any change. Abstractive summarization means producing a brief interpretation of the original text. In this method, sentences of the summary may not necessarily be written in the same way as in the original text. Summarization systems can also be classified based on whether they are built to produce educational or informative outputs.

    The majority of existing automatic summarization systems use the extractive summarization method. Extractive summarization can be done with three approaches: the statistical approach, the linguistic approach, and the combined approach [4]:

    Statistical approach: In this approach, summarization depends on the statistical distribution of the features of interest and the quantitative characteristics of the text. This approach involves using information retrieval and classification techniques without trying to understand the whole document. An information retrieval method analyzes the position, length, and frequency of words and sentences in the document, and a classifier determines which sentences could be part of the summary based on a set of instances on which it is trained. In this method, the sentences of the original text are extracted without attention to the meaning of the words.

    Linguistic approach: In this approach, the computer needs a deep knowledge of the language it is processing, to the extent that it can analyze and interpret sentences and then decide which phrases should be included in the summary. In this method, the relationships between words and phrases in the document are identified through part-of-speech tagging, grammatical analysis, lexical analysis, and the extraction of meaningful phrases. The parameters of these processes could be sign words, features, nouns, and verbs. While the statistical approach tends to be more computationally efficient, the linguistic approach often produces better summaries as it factors in the semantic relationships in the original text.

    Combined approach: This approach involves using a combination of both statistical and linguistic methods to generate more concise and meaningful summaries.

    Statistical summarization methods only use statistical features, which, while making them quite simple and agile, also makes them more susceptible to incoherence and inconsistency in the generated summaries.

    The combined use of multiple extractive summarization techniques can indeed be very effective in improving the quality of the produced summaries. In this study, the combined approach to summarization is used to produce unambiguous, concise, consistent, and coherent summaries based on linguistic features taken from text structure analysis, modeling of the text structure and the relationships between its entities, and an improved single-document feature selection process.

    In the rest of this article, the second section provides a review of text summarization methods and systems developed for Persian and other languages, the third section describes the proposed method, the fourth section presents and discusses the results of implementing the method, and the fifth section presents the conclusions and offers a few suggestions for future work.

    2 Review of Literature

    The concept of automatic text summarization was first introduced by Luhn in 1958, in the sense of determining the distribution of words in sentences and identifying the keywords of a document. Since then, a variety of summarization methods have been developed based on different approaches and for different purposes. However, most of these methods can be described as improvements upon previous techniques.

    2.1 Summarization Methods for Persian Texts

    In [2], researchers have proposed a method for improving the quality of automatic single-document Persian text summarization. In this method, first, a combination of natural language processing methods and correlation graphs is used to compute an importance factor for each grammatical unit in the entire text with the help of the Term Frequency-Inverse Document Frequency (TF-IDF) method. Then, a sentence feature vector is constructed based on four features: the degree of similarity of each sentence to the title, the degree of similarity of each sentence to the keywords, the degree of similarity of two sentences to each other, and the position of each sentence. Finally, the constructed similarity graphs are used to identify and select the sentences with the highest external similarity and the lowest internal similarity for inclusion in the summary text.

    The automatic Persian text summarization system presented in [6] introduced a new method for summarizing Persian news texts using the knowledge contained in FarsiNet. In this method, sentences are clustered based on how similar or related they are into three categories: similar, related, and co-occurring. The sentences to be included in the summary are then selected accordingly to reach the lowest possible redundancy and the highest possible relevance. In this method, the use of co-occurring clusters reduces the ambiguity of the summary text. This method scores the sentences based on eight features: sentence length, paragraph position, demonstrative pronouns, title words, keywords, word weight, specific nouns, and general nouns. In [7], researchers at the Ferdowsi University of Mashhad introduced the Ijaz system for single-document summarization of Persian news texts. This system uses a set of factors including the degree of similarity with the context, stop words, sentence length, sentence position, important phrases, pronouns, demonstrative pronouns and determining phrases in sentences, marked phrases, and similarity to the title to rate the sentences in terms of importance. It then uses a linear combination of these factors to compute a final score for each sentence. The PSO-based extractive Persian text summarization method of [8] extracts all sentences of the input document, identifies the candidate words of each sentence, and then generates and stores a context vector for each candidate. Using this vector, it quantifies the similarity of each pair of sentences and stores it in a similarity matrix. Finally, it uses a clustering algorithm to classify the sentences and picks the most important sentence within each class based on the calculated scores. In [9], a neural network technique was used to investigate the parameters that may influence the performance of summarization systems.

    In the summarization method of this study, first, the paragraphs are scored, and then a neural network is used to compute sentence-based scores only for those paragraphs that score higher than a certain threshold level. The neural network used in this study is a three-layered feedforward neural network with 9 input nodes, 6 hidden nodes, and 1 output node. The output of this network determines whether or not a sentence should be reflected in the summary text. According to the author, this approach reduces the processing volume, increases the speed of summary generation, and improves on the performance of existing systems. In [10], an automatic Persian text summarization system has been developed based on Latent Semantic Analysis (LSA). Using a rich vocabulary, this system performs stemming with higher precision and efficiency than similar systems. Thanks to this rich vocabulary, the LSA process of this system can properly determine the sets of synonyms and lexical chains and the semantic links between words and sentences. In [11], researchers have developed an extractive Persian text summarization method based on an anthropological approach. Inspired by the human way of thinking, this method involves creating a summary by building a chain of sentences that are more correlated and related to each other, rather than just the most important sentences. In this method, the input text is divided into paragraphs and the text of each paragraph is divided into a set of sentences. Once the relationships between all sentences in each paragraph are determined, the system attempts to find a chain of sentences with the strongest connection to each other. The resulting chain will be the summary of the original text. The FarsiSum automatic Persian text summarizer introduced in [12] is a modified version of the SweSum summarizer for Swedish [13]. This system receives its input texts in HTML format. FarsiSum is a statistical method and gives higher scores to the first sentences. In the graph-based Persian text summarization algorithm of [14], graph theory is used to choose which sentences of the input document should be candidates for inclusion in the summary. In this algorithm, the nodes and edges of a graph constructed for the text are weighted based on different criteria and then the final weight of each sentence is determined by combining these values. The final weight reflects the importance of the sentence and the likelihood that it will appear in the final summary. The Persian text summarizer proposed in [15] is an extractive summarization method operating based on graph theory and lexical chains.

    2.2 Summarization Methods for Other Documents

    In [16], the vector space model has been used to develop an abstractive automatic summarization system for online debate texts. This system consists of three modules: point extraction, point curation, and summary generation. Point extraction is done by dependency parsing and analysis of the syntactic structure. After selecting the topic points and the points that could be suitable for the summary, shorter points are generated from smaller indirect points. In [17], an extractive summarization method has been developed for Arabic texts. This method uses a combination of semantic information extracted from the Arabic WordNet and Rhetorical Structure Theory (RST), which is one of the most widely used theories in natural language processing. In this method, a combination of linguistic selection methods and sentence feature selection methods is used to improve the quality of Arabic text summarization. The proposed RST-based method first generates an initial summary and then uses the score of each sentence in this summary to investigate the similarity of sentences with the main title and subheadings. The automatic Indonesian text summarization system of [18] uses a combination of sentence scoring and a decision tree for summary generation. In this system, the C4.5 algorithm is used to select the sentences of interest. Then, a sentence scoring method is used to weight each sentence based on 8 features, including TF-IDF, uppercase letters, proper nouns, cue phrases, numerical data, sentence length, sentence position, and similarity to the title. Next, a decision tree model is generated based on the training data, and finally, the resulting rules are used to determine important sentences and generate the summary accordingly. In [4], an extractive summarization method based on a combined statistical-linguistic approach has been proposed for Indian texts.

    This summarization system consists of three main stages: preprocessing, sentence feature extraction, and a genetic algorithm (GA) for ranking sentences based on optimized feature weights. Each sentence is represented by a sentence feature vector. For each sentence, the statistical-linguistic features are examined and a score is produced based on the weight of the features in that sentence. The results are then used to rank the sentences. Sentence features can take values between zero and one. In the GA, the fittest chromosome is selected after a certain number of generations, and then the distance between each sentence score and the fittest chromosome is measured by the Euclidean distance formula. Sentences are then sorted in ascending order of this distance. Finally, a summary is produced by extracting a certain number of the highest-ranked sentences from the document, depending on the intended degree of summarization. The extractive summarization method proposed in [19] uses a Hidden Markov Model (HMM) part-of-speech tagging technique for summary generation. Part-of-speech (POS) tagging is an automatic machine learning process for tagging each word in a sentence as a verb, noun, adjective, or other typical component of natural languages. In the method of [20], summarization is performed by feature selection based on a GA and a probabilistic technique. This method considers five features: similarity to the title, sentence length, sentence position, numbers, and pseudowords. Depending on the number of features used, each chromosome is made of up to five genes, each representing a feature in binary format. The system proposed in [21,22], called QUESTS, is an integrated query-based system for generating extractive summaries from a set of documents. This system draws an integrated graph of the relationships between the sentences of all input documents and then uses the found relationships to derive multiple subgraphs from the main graph. These subgraphs consist of sentences that are more highly related to the query and to each other. The system then ranks the subgraphs based on a scoring model and selects the highest-ranked subgraph, the one most relevant to the query, for inclusion in the summary.

    3 Proposed Method

    This paper presents a new combined extractive single-document text summarization method based on text structure. The proposed summarization process consists of three stages: preprocessing, feature selection, and summary generation. The architecture of the proposed method is shown in Fig. 2.

    Figure 2: Architecture of the proposed summarization method

    The idea used in the proposed method and the improvement made in the feature selection stage based on the text structure greatly reduce issues such as inconsistency, ambiguity, and redundancy in the summary text.

    3.1 Preprocessing

    Before starting the summarization process, it is necessary to convert the original text into a single form that is suitable for the summarization operation. The preprocessing stage generally consists of normalization, tokenization, POS tagging, stemming/lemmatization, and stop word removal, and is done in more or less the same way in all languages. In this study, preprocessing is performed with the help of the ParsiPardaz tool, which has been developed by the Iranian National Cyberspace Research Institute (Telecommunication Research Center of Iran) for the Persian language. This tool has been shown to have 98% accuracy in tagging and 100% accuracy in normalization [11].

    3.1.1 Normalization

    The first step in preprocessing is to standardize the input text. One of the common problems in Persian texts is the variety of ways that some words and letters can be written. For example, all words that contain letters with multiple written forms should be changed into a single format. This normalization is done at the start of the preprocessing stage to avoid problems in the subsequent stages.

    3.1.2 POS Tagging

    After the normalization stage, the role of words (e.g., noun, verb, adjective, conjunction) in sentences should be determined for use in the next stages.

    3.1.3 Tokenization

    This step involves identifying and separating the words and sentences of the input text based on the characters that signify the end of a word or sentence.In this step, conjunctions are also treated as the boundaries of sentences.

    3.1.4 Stop Word Removal

    Stop words are words that are commonly used in texts but are not descriptive, do not depend on the subject, and do not have a semantic role. Examples include conjunctions, linking verbs, pronouns, adpositions, and adverbs.

    3.1.5 Stemming

    After removing the stop words, the stem of the words that have remained in the text must be determined.
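The preprocessing pipeline described above can be sketched as follows. This is a minimal illustration only: the paper uses the ParsiPardaz tool for Persian, whose API is not reproduced here, so the stand-in functions (regex-based tokenization, a toy English stop-word list) are hypothetical, and the POS tagging and stemming stages are omitted.

```python
import re

# Hypothetical stop-word list; ParsiPardaz ships its own Persian resources.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "in"}

def normalize(text):
    # Collapse whitespace and unify case, a crude stand-in for the
    # script normalization applied to Persian letter variants.
    return re.sub(r"\s+", " ", text).strip().lower()

def tokenize(text):
    # Split into sentences on end-of-sentence punctuation, then into words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [re.findall(r"\w+", s) for s in sentences]

def remove_stop_words(sentence_tokens):
    return [[w for w in sent if w not in STOP_WORDS] for sent in sentence_tokens]

def preprocess(text):
    return remove_stop_words(tokenize(normalize(text)))

print(preprocess("The cat sat. A dog barked!"))
# [['cat', 'sat'], ['dog', 'barked']]
```

Each stage feeds the next, so later feature extraction operates on cleaned, sentence-segmented token lists rather than raw text.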

    4 Feature Selection

    In this stage, the input text undergoes a feature analysis.Since the most important words in the text that arrives at this stage are nouns and adjectives, here, the main features of the text are weighted based on the features identified for important terms, sentences, and paragraphs.In the feature selection stage, the main features of the input document are scored in the following three phases.

    TBF: Term-based Features

    SBF: Sentence-based Features

    PBF: Paragraph-based Features

    WFS: Weight Features Selection Sentence

    4.1 Term-Based Features

    Each sentence in the text consists of a number of terms that play an important role in the meaning of that sentence.Each of these terms has a number of features that can affect the importance of that sentence.

    4.1.1 TF-ISF

    This parameter indicates the importance of a term in a sentence. The TF-ISF of each term is calculated as follows, Eqs. (1) and (2):

    TF-ISF(w, s) = TF(w, s) × ISF(w) (1)

    ISF(w) = log(N / n_w) (2)

    In the above equations, N is the total number of sentences in the input document and n_w is the number of sentences that contain the word w. Accordingly, the importance of the words of the sentence s is calculated as follows, Eq. (3):

    Score(s) = (1 / N_s) × Σ_{w ∈ s} TF-ISF(w, s) (3)

    In this equation, N_s is the number of words in the sentence s.
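A sketch of this scoring, assuming the common definitions TF(w, s) = count(w in s) / |s| and ISF(w) = log(N / n_w), with each sentence's score normalized by its word count N_s:

```python
import math

def tf_isf(sentences):
    """Score each sentence by the normalized TF-ISF of its words.

    `sentences` is a list of token lists. N is the number of sentences,
    n_w the number of sentences containing word w (assumed definitions).
    """
    n = len(sentences)
    # n_w: in how many sentences each word appears
    containing = {}
    for sent in sentences:
        for w in set(sent):
            containing[w] = containing.get(w, 0) + 1
    scores = []
    for sent in sentences:
        if not sent:
            scores.append(0.0)
            continue
        total = sum((sent.count(w) / len(sent)) * math.log(n / containing[w])
                    for w in sent)
        scores.append(total / len(sent))  # normalize by N_s
    return scores
```

Words shared by every sentence get ISF = log(1) = 0, so they contribute nothing, which is the intended discounting of ubiquitous terms.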

    4.1.2 Positive and Negative Terms

    Here, positive terms refer to terms and phrases like “therefore,” “concluded,” “in conclusion,” “in the end,” “found,” etc., whose presence signifies the importance of a sentence. In contrast, negative terms refer to terms and phrases like “stated,” “in other words,” “as an example,” etc., which indicate the insignificance of a sentence. Based on this definition, sentences are scored as follows, Eq. (4):
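The cue-term feature can be illustrated as below. The term lists and the simple +1/-1 weighting are assumptions for illustration, since Eq. (4) itself is not reproduced here:

```python
# Hypothetical cue-term lists built from the examples in the text.
POSITIVE_TERMS = {"therefore", "concluded", "in conclusion", "in the end", "found"}
NEGATIVE_TERMS = {"stated", "in other words", "as an example"}

def cue_term_score(sentence):
    """Count positive cues minus negative cues found in the sentence."""
    text = sentence.lower()
    pos = sum(1 for t in POSITIVE_TERMS if t in text)
    neg = sum(1 for t in NEGATIVE_TERMS if t in text)
    return pos - neg

print(cue_term_score("Therefore, we found the effect is real."))  # 2
```

A positive score nudges the sentence toward inclusion; a negative score pushes it away.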

    4.1.3 Similarity to Title Words and Keywords

    If a sentence contains a title word or a keyword, it is likely to be more important than other sentences. Keywords are usually specified by the user. Therefore, the effect of similarity to title words and keywords is calculated by the following Eq. (5):

    4.1.4 Numerical Values

    Numerical values within sentences usually contain important information.Thus, the effect of this feature in the sentence is quantified by the following Eq.(6):

    4.1.5 Emphasized Words

    Sometimes, one or more words in a sentence are emphasized by the use of quotation marks,italic, bold or underlined formatting, etc., which indicate the importance of that word and therefore the importance of that sentence.The effect of such words on the importance of the sentence is determined by the following Eq.(7):

    4.1.6 Stop Words and Insignificant Words

    Some sentences contain stop words that play no significant semantic role in the sentence.In sentences where such words are used excessively, it may indicate the lower importance of the sentence.See Eq.(8):

    4.1.7 Proper Nouns

    Proper nouns include the names of people, places, and times. The presence of a proper noun in a sentence makes it more important. Therefore, the effect of these nouns is determined based on the following Eq. (9):

    4.1.8 Marked Phrases

    In some sentences, certain parts are marked with quotation marks or other characters (e.g., {}, [], “”, «») to highlight the importance of that phrase [15]. The effect of these marked sections in a sentence is quantified by the following Eq. (10):

    4.2 Sentence-Based Features

    Besides term-based features, sentence-based features are the other key parameters of extractive summarization. The purpose of these features is to ensure the proper selection of sentences with high importance ratings. Each sentence in the input document has a rank that is determined based on the weight of its features. The most important features considered for each sentence are described below.

    4.2.1 Sentence Length

    Excessively short or long sentences are usually not suitable for a summary. This is because excessively short sentences tend to convey little meaning, and very long sentences make the summary longer than required. Thus, the following Eq. (11) is used to take this into consideration:

    The denominator of this fraction is the number of words in the longest sentence of the document.
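Following the denominator described above, the length feature can be sketched as a ratio of each sentence's word count to that of the longest sentence in the document:

```python
def sentence_length_score(sentences):
    """Length feature: words in each sentence over words in the longest
    sentence of the document (per the denominator described for Eq. (11))."""
    longest = max(len(s) for s in sentences)
    return [len(s) / longest for s in sentences]
```

This maps every sentence into [0, 1], with the longest sentence scoring 1.0; how the penalty for over-long sentences is applied on top of this ratio is not spelled out here.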

    4.2.2 Sentence Position

    An important feature of a sentence is its position in the paragraph. In paragraphs made of declarative sentences, the beginning and end sentences are usually more important than the others, and should therefore be given a higher weight. The importance of sentences, as indicated by their position in the paragraph, is quantified as follows, Eq. (12):

    In the above equation, Position_s is the position of the sentence in the paragraph, and TotalS_p is the total number of sentences in the paragraph.
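One possible realization of this position feature, assuming a symmetric formulation (the exact Eq. (12) is not reproduced here) in which the first and last sentences of a paragraph score highest:

```python
def sentence_position_score(position, total):
    """Position feature favoring the first and last sentences of a
    paragraph. `position` is 1-based; `total` is the number of sentences.
    The symmetric max-of-reciprocals form is an assumption."""
    return max(1.0 / position, 1.0 / (total - position + 1))
```

Under this formulation the first and last sentences score 1.0 and the score decays toward the middle of the paragraph.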

    4.2.3 Similarity to Beginning and End Sentences of the Paragraph

    Given the high importance of the beginning and end sentences of each paragraph, another feature that can benefit the selection of high-value sentences for a summary is the degree of similarity to these sentences.See Eq.(13):

    4.2.4 Sentence Centrality

    Sentence centrality is the degree to which the keywords of a sentence overlap with other sentences in the paragraph.The greater the overlap is, the more important and valuable that sentence will be for the text.See Eq.(14):

    In the above equation, the numerator of the fraction is the number of keywords in the sentence that are also present elsewhere in the paragraph, and the denominator is the total number of keywords in the paragraph.
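The centrality feature follows directly from this description of the numerator and denominator:

```python
def sentence_centrality(sentence_keywords, other_sentences_keywords):
    """Centrality: keywords of this sentence that also occur elsewhere in
    the paragraph, over all keywords of the paragraph (per the description
    of Eq. (14)). `other_sentences_keywords` is a list of keyword lists
    for the paragraph's remaining sentences."""
    elsewhere = set()
    for kws in other_sentences_keywords:
        elsewhere |= set(kws)
    paragraph_keywords = elsewhere | set(sentence_keywords)
    if not paragraph_keywords:
        return 0.0
    shared = set(sentence_keywords) & elsewhere
    return len(shared) / len(paragraph_keywords)
```

A sentence whose keywords are echoed throughout the paragraph scores high; a sentence with purely local vocabulary scores near zero.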

    4.3 Paragraph-Based Features

    In addition to term-based and sentence-based features, the features of paragraphs, which consist of sequences of sentences, can also benefit the selection of sentences for the summary. The parameters used for this purpose are described in the following.

    4.3.1 Paragraph Position

    In most texts, including news documents (depending on the writing style), the beginning and end paragraphs tend to convey more important information.Thus:

    In this Eq. (15), Position_P is the position of the paragraph, and TotalP_D is the total number of paragraphs in the input document.

    4.3.2 Paragraph Centrality

    In a typical document, some paragraphs have more references to the main topic and carry more information about the subject.Therefore, using this feature can improve the quality of the texts to be selected for summary generation.

    In this Eq.(16), the numerator is the number of keywords in the paragraph that are also present elsewhere in the text, and the denominator is the total number of keywords in the text.

    4.3.3 Similarity to Title and Keywords

    A paragraph that contains a large number of title words and keywords often conveys critical information about the document and can benefit the summarization.

    Eq. (17) represents the impact of the similarity of the content of a paragraph to keywords and title words on the summary document.

    4.3.4 Important Signs

    In some documents, the paragraphs that are supposed to grab the reader’s attention are marked by bullet points, numbered lists, and multilevel lists.Therefore, the presence of these signs can indicate importance.See Eq.(18):

    5 Summarization

    5.1 Sentence Weighting and Initial Summary Chain Generation

    The previous section described the parameters used in summarization in the three main groups of term-based features, sentence-based features, and paragraph-based features.These features are key determinants of whether a sentence will be included in the summary text.The next step is to determine the weight of each sentence based on a combination of these parameters.This weight indicates the importance of each sentence for the text.The weight of sentence i from paragraph j is defined based on the linear combination of the described criteria as shown below Eq.(19):
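The weighting and initial chain generation can be sketched as follows. The feature coefficients are hypothetical (equal weights by default), since the actual coefficients of Eq. (19) are not reproduced here:

```python
def sentence_weight(features, coeffs=None):
    """Linear combination of a sentence's feature scores, as in Eq. (19).
    Equal unit coefficients are assumed when none are given."""
    if coeffs is None:
        coeffs = [1.0] * len(features)
    return sum(c * f for c, f in zip(coeffs, features))

def initial_chain(sentences, feature_vectors, compression=0.3):
    """Sort sentences by weight (descending) and keep the top fraction
    dictated by the desired degree of compression."""
    weighted = sorted(zip(sentences, map(sentence_weight, feature_vectors)),
                      key=lambda pair: pair[1], reverse=True)
    keep = max(1, round(len(sentences) * compression))
    return [s for s, _ in weighted[:keep]]
```

The returned list is the initial summary chain, which the next subsection refines by removing redundancy and ambiguity.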

    After calculating the weight of every sentence in the input document, the sentences are sorted in descending order of their weight.Then, an initial chain of sentences is produced by taking a certain number of sentences from the top of this list depending on the desired degree of summarization (compression).

    5.2 Final Summary Chain Generation

    An optimal summary is one that is concise and unambiguous and consists of key sentences. To reach such a summary, the initial summary produced in the previous step is refined through two processes: redundancy elimination and ambiguity elimination. These processes are described below.

    5.2.1 Redundancy Elimination

    To avoid including similar sentences in the summary, every pair of sentences in the summary text is compared. Upon finding two sentences with over 75% overlap, the lower-ranked sentence is removed from the summary text.
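This redundancy check can be sketched as below. The overlap measure (shared words relative to the shorter sentence) is an assumption, as the text only specifies the 75% threshold:

```python
def word_overlap(a, b):
    """Fraction of shared words relative to the shorter sentence
    (an assumed overlap measure)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / min(len(wa), len(wb))

def eliminate_redundancy(ranked_sentences, threshold=0.75):
    """Walk sentences in rank order, dropping any sentence that overlaps
    more than `threshold` with one already kept (the lower-ranked of the
    pair is the one removed)."""
    kept = []
    for s in ranked_sentences:
        if all(word_overlap(s, k) <= threshold for k in kept):
            kept.append(s)
    return kept
```

Because the input is already sorted by weight, a greedy single pass suffices: whenever two sentences collide, the one encountered later (hence lower-ranked) is discarded.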

    5.2.2 Ambiguity Elimination

    The presence of pronouns with unclear antecedents can make sentences misleading or ambiguous, and thus make the summary difficult to comprehend. Moreover, the antecedent of a pronoun may be in a previous sentence that is not included in the summary. To avoid this problem, the sentence preceding the selected ambiguous sentence in the original text is also included in the summary text.

    6 Implementation and Results

    The proposed method was implemented on 5000 texts taken from news websites, the topics of which are listed in Tab.1.The results were then evaluated using standard criteria.

    Table 1: Specifications of the selected news articles

    The preprocessing operation starts with normalization, which is followed by POS tagging to determine the role of words, and then tokenization to determine the boundaries of words and sentences.The other two operations of the preprocessing stage are the stop word removal and stemming.The tool used for preprocessing in this study was the ParsiPardaz tool developed by the Iranian National Cyberspace Research Institute (Telecommunication Research Center of Iran).One of the great features of this tool is its high accuracy in tagging the words of input documents.

    Nouns and adjectives are the most important words in the input text. After the preprocessing operation, the method described in the previous section was used to build the initial summary based on term-based, sentence-based, and paragraph-based features. Finally, ambiguity and redundancy elimination was performed to turn this initial summary into the final summary text.

    The summarization performance was measured by precision and recall, which are among the criteria most commonly used for this purpose. These criteria are defined as follows, Eqs. (20) and (21):

    Precision = |Sum_r ∩ Sum_s| / |Sum_s| (20)

    Recall = |Sum_r ∩ Sum_s| / |Sum_r| (21)

    In the above equations, Sum_r is the set of sentences in the human-generated summaries and Sum_s is the set of sentences in the summaries produced by the proposed summarization system. For this evaluation, a combined recall-precision measure was also calculated as follows, Eq. (22):

    F = (2 × Precision × Recall) / (Precision + Recall) (22)
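These criteria can be computed as follows, using the standard definitions of precision, recall, and the combined F-measure over the sentences shared between system and reference summaries:

```python
def evaluate(system_sentences, reference_sentences):
    """Precision, recall, and F-measure over shared sentences."""
    sys_set, ref_set = set(system_sentences), set(reference_sentences)
    shared = len(sys_set & ref_set)
    precision = shared / len(sys_set) if sys_set else 0.0
    recall = shared / len(ref_set) if ref_set else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure
```

Precision penalizes system sentences absent from the reference; recall penalizes reference sentences the system missed; the F-measure is their harmonic mean.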

    The results of the evaluation of the proposed method based on the above criteria are presented in Tab.2.

    Table 2: Comparison of the proposed method with existing methods

    As the results in the above table show, the proposed summarization method is more accurate than the SweSum method on similar data sets (see Figs. 3 and 4). Other major advantages of the proposed summarization method are the elimination of redundancy, by removing similar sentences from the summary, and the elimination of ambiguity, by inserting into the summary text the sentences on which ambiguous sentences depend.

    Figure 3: Comparison of the proposed method with existing methods for 30% compression

    Figure 4: Comparison of the proposed method with existing methods for 40% compression

    Another evaluation criterion was the average number of sentences that were present in human-generated summaries as well as in those produced by the proposed method. Fig. 5 shows the results of this evaluation, which was conducted with the help of five experts.

    Figure 5: Average number of sentences shared between the summaries of the proposed method and those produced by five experts

    Evaluation of the length of the produced summaries showed that the shortest document consisted of 10 sentences with 70 words, and its summary contained 5 sentences with 39 words. For the longest document, which consisted of 50 sentences with 200 words, the summary contained 41 sentences with 147 words.
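    These length figures imply quite different retention rates for short and long documents; a simple sentence-level and word-level ratio makes the comparison explicit (counts taken from the text above):

```python
# Retention ratios (kept/total) for the shortest and longest documents,
# using the sentence and word counts reported in the evaluation.

def retention_ratio(kept, total):
    return kept / total

short_sent = retention_ratio(5, 10)     # shortest doc, sentences kept
short_word = retention_ratio(39, 70)    # shortest doc, words kept
long_sent = retention_ratio(41, 50)     # longest doc, sentences kept
long_word = retention_ratio(147, 200)   # longest doc, words kept

print(f"sentences: {short_sent:.0%} vs {long_sent:.0%}")  # sentences: 50% vs 82%
print(f"words:     {short_word:.0%} vs {long_word:.0%}")  # words:     56% vs 74%
```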

    To evaluate the performance of the proposed method, readers’ satisfaction with the summaries produced for different topics was also investigated. The results of this investigation are presented in Fig. 6.

    Figure 6: Readers’ satisfaction with the summaries produced by the proposed method

    This evaluation showed that, on average, 89% of readers were satisfied with the summaries produced by the proposed method, indicating that the method is effective and accurate at producing the summaries users desire.

    7 Conclusion

    One of the main advantages of the summarization method proposed in this paper over similar methods is its combined use of statistical and linguistic techniques to model the text structure and examine the relationships between entities in the input document. This improves the sentence feature selection process and yields an unambiguous, concise, consistent, and coherent summary.

    The proposed summarization method consists of three stages: preprocessing, feature selection, and summary generation. In the feature selection stage, the main features of the input document are captured in three phases: term-based, sentence-based, and paragraph-based features.

    Comparing the performance of the proposed method with a similar method on a series of news texts showed that, with 78.5% precision and 80% recall, the proposed method outperformed the other method. Readers’ satisfaction with the summaries produced by the method was 89%. The approach substantially reduces issues such as incoherence, ambiguity, and redundancy in the summary text. To improve the proposed summarization method in future studies, ontology techniques could be used to create stronger lexical chains and semantic links between different entities in the input document.

    Funding Statement: The author(s) received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
