
    Educational Videos Subtitles’ Summarization Using Latent Dirichlet Allocation and Length Enhancement

    Computers, Materials & Continua, 2022, Issue 3

    Sarah S. Alrumiah and Amal A. Al-Shargabi

    Department of Information Technology, College of Computer, Qassim University, Buraydah, 51452, Saudi Arabia

    Abstract: Nowadays, people use online resources such as educational videos and courses. However, such videos and courses are mostly long, and thus summarizing them is valuable. The video contents (visual, audio, and subtitles) can be analyzed to generate textual summaries, i.e., notes. Videos’ subtitles contain significant information. Therefore, summarizing subtitles is an effective way to concentrate on the necessary details. Most existing studies used Term Frequency-Inverse Document Frequency (TF-IDF) and Latent Semantic Analysis (LSA) models to create lecture summaries. This study takes another approach and applies Latent Dirichlet Allocation (LDA), which has proved its effectiveness in document summarization. Specifically, the proposed LDA summarization model follows three phases. The first phase prepares the subtitle file for modelling by performing preprocessing steps, such as removing stop words. In the second phase, the LDA model is trained on the subtitles to generate the keywords list used to extract important sentences. In the third phase, a summary is generated based on the keywords list. The summaries generated by LDA were lengthy; thus, a length enhancement method is proposed. For the evaluation, the authors developed manual summaries for the existing “EDUVSUM” educational videos dataset. The authors compared the generated summaries with the manually generated outlines using two methods: (i) Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and (ii) human evaluation. The LDA-based summaries outperform the summaries generated by TF-IDF and LSA. Besides reducing the summaries’ length, the proposed length enhancement method improved the summaries’ precision rates. Other domains, such as news videos, can apply the proposed method for video summarization.

    Keywords: Subtitle summarization; educational videos; topic modelling; LDA; extractive summarization

    Abbreviations:

    AI Artificial Intelligence

    ASR Automatic Speech Recognition

    BERT Bidirectional Encoder Representations from Transformers

    CV Computer Vision

    EDUVSUM The name of an Educational Videos Summaries dataset

    IDF Inverse Document Frequency

    LDA Latent Dirichlet Allocation

    LSA Latent Semantic Analysis

    ML Machine Learning

    MMLDA MultiModal LDA

    NLP Natural Language Processing

    NTM Neural Topic Model

    ROUGE Recall-Oriented Understudy for Gisting Evaluation

    ROUGE-1 ROUGE measurement that measures the overlap of unigrams

    ROUGE-2 ROUGE measurement that measures the overlap of bigrams

    ROUGE-L ROUGE measurement that measures the longest common subsequence

    TF-IDF Term Frequency-Inverse Document Frequency

    1 Introduction

    With the availability of online learning, i.e., learning through the internet [1], the production of educational videos has increased. Educational videos vary in duration, content, and presentation style. For instance, lecture videos usually present the subject’s PowerPoint slides and have long durations (more than 30 min). In contrast, kids’ educational videos are often short (1 to 5 min) with attractive animations. Even though long educational videos contain valuable information, people usually avoid watching them [2]. Searching for a piece of knowledge in a long video is challenging, as it requires a lot of time. Thus, there is a need for techniques to summarize such videos.

    Summarizing educational videos benefits learners by saving their time and storage space and making searching and indexing quick and easy [3-5]. A video can be summarized based on its audio [3,5-7], visual [3,5,7,8], and textual [3,5-14] data, i.e., subtitles. The video’s audio is summarized by converting the spoken words to text using speech-to-text techniques and applying text-based summarization methods [6]. In addition, a video can be summarized based on its visual content into a textual form using techniques such as video and image tagging and captioning [15,16]. In subtitle summarization, the textual data is summarized using text-based summarization algorithms [9]. On the other hand, the generated summaries can be either a short video [17] or a text [3-14,16,18].

    In textual summarization, both the input (the content to be summarized) and the output (the generated summary) are in textual form. Textual summarization can be classified, based on the number of input documents, into single-document and multi-document summarization [19]. Moreover, based on the algorithm used, it is classified into abstractive and extractive summarization [9]. Abstractive summarization summarizes a document the way a human would, using external vocabulary and paraphrasing [4]. In contrast, extractive summarization summarizes a document by extracting the exact sentences that are considered significant, based on each sentence’s frequency and importance score [3,5-14]. Tab. 1 compares the two summarization techniques.

    This paper aims to summarize educational videos to save learners’ time and resources and provide quick and straightforward searching and indexing. The main focus is on summarizing subtitles because, in most lectures, the visual content, e.g., slides, is given to students in advance, and most educational websites provide video transcripts. Therefore, in lecture videos, the main concentration is on the spoken sentences present in the video’s subtitles and transcripts. This study uses the subtitle files available in the “EDUVSUM” educational videos dataset [17]. Extractive text summarization helps extract valuable sentences. Therefore, single-document extractive summarization is applied to the videos’ subtitles. Given the scientific content of courses and lectures, people need the exact sentences without paraphrasing. Additionally, the current work focuses on generating text summaries, as most students prefer referring to the lecture’s textual notes [20]. Fig. 1 illustrates the scope of this work, highlighted in blue.

    Table 1: A comparison between abstractive and extractive summarization

    Figure 1: The scope of the study

    Based on the literature, we can derive the following observations:

    (1) Educational videos’ datasets are limited and not sufficient for summarization purposes. The available datasets mostly include short clips, lack subtitles or transcripts, and require major manual preprocessing. Moreover, to the best of our knowledge, educational videos’ datasets lack the human-generated summaries necessary for the model evaluation process.

    (2) Although Latent Dirichlet Allocation (LDA) recorded the highest performance in summarizing documents, it has not been applied to video subtitle summarization. However, different algorithms were applied, including Latent Semantic Analysis (LSA) [3,7], Term Frequency-Inverse Document Frequency (TF-IDF) [9,10], and Bidirectional Encoder Representations from Transformers (BERT) [11,13,17].

    (3) LDA-based document summaries always tend to be lengthy [5].

    Thus, this work proposes the use of LDA to summarize educational videos based on their subtitles. The main contributions of this study are:

    (1) Extending the “EDUVSUM” educational videos’ dataset [17] by generating human summaries from the subtitle files, as the original dataset only includes videos and subtitles, without human summaries.

    (2) A proposed summarization method based on LDA.

    (3) A proposed method for enhancing summaries in terms of length and quality.

    The two proposed methods mentioned above were validated using experiments.

    This paper is structured as follows: Section 2 illustrates some of the related works. Section 3 presents the materials and methodology. Section 4 shows the results, Section 5 discusses the study’s outcomes, and Section 6 concludes the study.

    2 Related Work

    Many researchers have applied data mining techniques to enhance the education field, such as analyzing students’ performance [21] and studying the discoursal identity in postgraduates’ academic writings [22]. On the other hand, the scope of this study focuses on the extractive summarization of educational videos. Extractive summarization extracts valuable information without modification [23]. For instance, extractive text summarization summarizes a document by selecting the important sentences. Extractive text summarization starts by accepting an input document and preprocessing it, e.g., removing punctuation and stop words [19]. Then feature extraction, e.g., a bag of words, is applied to the preprocessed text. After that, sentences are represented using, for example, a vector representation. Finally, based on some algorithm and criteria, sentence selection is applied to generate the summary. Fig. 2 summarizes the extractive text summarization process.
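    As an illustration of this generic pipeline, the following Python sketch preprocesses a document, scores sentences with a simple bag-of-words frequency score, and selects the top-scoring half of the sentences. The stop-word list, the scoring function, and the summary ratio are simplified assumptions and do not reproduce any of the surveyed systems.

    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "that"}  # toy stop-word list

    def preprocess(sentence):
        # Lowercase, strip punctuation, and drop stop words.
        words = re.findall(r"[a-z']+", sentence.lower())
        return [w for w in words if w not in STOP_WORDS]

    def summarize(document, ratio=0.5):
        # Split into sentences on end-of-sentence punctuation.
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
        # Bag-of-words term frequencies over the whole document.
        freq = Counter(w for s in sentences for w in preprocess(s))
        # Score each sentence by the summed frequency of its words.
        scored = [(sum(freq[w] for w in preprocess(s)), i, s) for i, s in enumerate(sentences)]
        top = sorted(scored, reverse=True)[: max(1, int(len(sentences) * ratio))]
        # Restore the original order so the extractive summary reads naturally.
        return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))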

    Figure 2: Extractive text summarization process

    This section discusses the previous work on summarizing educational videos, which can be divided into three categories: (i) video summarization based on audio, visual scenes, and subtitles [3,5,7,8,16-18], (ii) audio-only summaries [6], and (iii) subtitle-only summarization [3,5-14]. Speech recognition faces some challenges when generating text summaries [6], e.g., a lack of punctuation [9]. Due to these audio-related issues and the availability of online subtitle-generation tools, this study focuses only on video and subtitle summarization. Moreover, subtitles are treated as documents. Therefore, this section also discusses some of the extractive document summarization efforts. The works discussed here were selected based on their relevance to the study’s scope.

    2.1 Summarization Based on Audio, Visual, and Subtitles

    Aggregating Natural Language Processing (NLP), Automatic Speech Recognition (ASR), and Computer Vision (CV) techniques has assisted researchers in generating video summaries [5]. This section discusses some of the applied video summarization methods.

    In [17], the authors analyzed the audio, visual, and textual contents of 98 educational videos to assign scores to important video segments. The authors used a Python-based method to extract audio features. Moreover, the researchers used Xception, ResNet-50, VGG-16, and Inception-v3 to extract the visual features and BERT for the textual features. An annotation tool and the EDUVSUM dataset, i.e., a dataset of annotated educational videos, were produced. The researchers concluded that visual features are not well suited for the academic domain.

    Additionally, multimedia news was summarized using Multimodal LDA (MMLDA) [5]. Furthermore, video summarization has been applied to movies [8]. Researchers in [8] summarized 156 scenes of the Forrest Gump movie based on scene descriptions and subtitle analysis. The authors found that subtitles have a positive effect on generating summaries.

    Other studies focused on summarizing long videos into short visual scenes and textual summaries [7]. The researchers applied the Luhn, LSA, LexRank, and TextRank algorithms to identify the best algorithm for summarizing one-hour-long videos. After assessing the generated summaries with Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and human comparison, the authors reported that LSA and LexRank provided the best results compared with the other methods.

    The authors in [3] created textual video summaries by first generating subtitles using speech recognition techniques and then applying NLP-based text summarization algorithms. Like [7], the researchers in [3] used Luhn, LSA, LexRank, and TextRank to generate textual summaries. Their results agreed with [7] that LSA performed well and had the best contribution.

    Learning how to directly map a sequence of video frames, i.e., scenes, to a series of words in order to generate video captions was studied in [16]. The authors found that developing video descriptions is challenging, as it is difficult to determine the salient content and describe it appropriately.

    On the other hand, the authors in [18] generated fixed-length textual summaries from asynchronous videos, i.e., videos without descriptions and subtitles. The researchers used LexRank and a Neural Topic Model (NTM) to produce textual summaries of five- to ten-minute news videos. They reported that video transcripts are essential for generating summaries, whereas audio and visual contents have a limited effect on summarization performance.

    2.2 Summarization Based on Subtitles Only

    Furthermore, some studies focused on subtitle summarization to generate textual summaries of educational videos. This section presents some of the applied subtitle summarization methods.

    Researchers in [9] summarized lecture subtitles into fixed-length sentences to avoid the misidentified punctuation marks in speech-to-text output. TF-IDF was used to find meaningful sentences. The authors concluded that the summarization is effective when punctuation is part of the subtitles.

    Another study summarized lecture videos by analyzing subtitles using TF-IDF [10]. The authors in [10] treated each sentence as a document and generated a summary from the sentences whose scores exceeded a threshold equal to the average TF-IDF value of all sentences. Based on the IDF relevancy term, the less frequently a term occurs, the higher its importance. In conclusion, the researchers found that extractive text summarization reduced the original content by 60%. Further, removing stop words did not affect the produced summary.

    Moreover, a cloud-based lecture summarization service was developed [11]. The researchers used BERT to generate summaries of lecture subtitles. Besides the summarization service, the system also provided storage and management services. K-means clustering was used to assist in the summary selection by identifying the sentence closest to each centroid. An extended work of [11] is presented in [13], where the authors added a dynamic method to select the appropriate number of clusters besides using BERT to produce summaries. By depending on the size of the cluster, the generated summary overcame the drawback of producing overly short summaries. However, most of the human-generated outlines contained three to four sentences, which cannot be taken as a standard for dynamically predicting the number of sentences in a summary.

    As subtitles are treated as documents when analyzed, the following paragraphs illustrate the extractive document summarization works. Document summarization is classified, based on the number of input documents, into single-document and multi-document summarization.

    The authors in [12] developed the MUSEEC tool to summarize documents. MUSEEC is an extractive text summarization tool that consists of the MUSE, POLY, and WECOM methods. MUSE provides supervised extractive summarization, while POLY produces unsupervised summaries. WECOM is used to shorten sentences. Furthermore, MUSEEC is a language-independent summarization tool. Thus, MUSEEC has been used and improved in other studies, such as movie summarization [8].

    Additionally, others combined the power of topic modelling with the simplicity of extractive summarization to produce document summaries [14]. LDA proved its effectiveness in generating summaries, as it improved on the TF-IDF results. Nevertheless, using topic modelling induces the need for pre-determined topic specifications. Furthermore, the authors in [24] combined topic modelling using LDA with classification techniques to generate document summaries. Topic modelling-based document summarization faces some challenges, such as uncontrolled output and the possibility of missing some expected topics [5]. However, summarizing documents with topic modelling algorithms generated good summaries.

    From the literature, we can see that LSA recorded high results when applied to video summarization. In comparison, studies that focused on subtitle-based summaries used only TF-IDF and BERT. Moreover, although LDA proved its effectiveness in summarizing documents, it has not been applied to subtitle summarization. Regardless of structural differences, both a document file and a subtitle file contain a collection of sentences. Therefore, this work applies topic modelling using LDA to educational video subtitles to generate lecture summaries.

    3 Materials and Methods

    This section discusses the materials and methods used to obtain the study’s results. The proposed LDA-based subtitle summarization model and dataset details are presented, as well as the proposed summaries’ length enhancement method and the evaluation methods. Fig. 3 illustrates the methodology phases.

    Figure 3: The study’s methodology phases

    3.1 Dataset Expansion

    This work used the educational videos’ subtitle files from the EDUVSUM dataset [17]. EDUVSUM is an educational videos dataset containing 98 MP4 videos, each with a subtitle file [17]. All videos in the dataset are in English. The videos describe topics in various fields, such as computer science, chemistry, and biology. Additionally, all videos have a duration of less than 16 min; see Tab. 2 for more information about the videos’ durations.

    Table 2: EDUVSUM dataset video counts along with their durations

    Considering videos of less than ten minutes as short educational videos, the authors excluded them from the study. Moreover, the authors developed manual summaries for the selected 26 videos, as the used dataset, i.e., EDUVSUM, did not provide subtitle summaries. The authors created the summary files by selecting the valuable sentences in the original subtitle files. The manually generated outlines were about 50-55% of the original subtitle files, i.e., the original subtitle contents were reduced by about half in the manual summaries. On the other hand, the structure of the dataset’s subtitle files needed some manual preprocessing and modification because the existing subtitles in [17] were produced with an online tool. Therefore, the authors removed time ranges and merged the sentences manually. During the experiments, the authors used a sample of five videos out of the selected 26 videos.

    3.2 LDA-Based Subtitles Summarization Model

    Summarizing subtitles is similar to document summarization, where the focus is on the valuable information, i.e., sentences, to add to the summary. In this paper, the terms subtitles and documents are used interchangeably. An input document (d) is a set of (n) sentences (s), d = {s1, s2, ..., sn}. Moreover, a sentence is a collection of words (w), s = {w1, w2, ..., wn}. To summarize a subtitle file, the authors extracted the informative sentences from the original document using a topic modelling technique. Topic modelling is an unsupervised learning approach that aims to organize and classify textual data by discovering word patterns [25].

    Additionally, identifying keywords, i.e., topics, helps in determining valuable sentences. For instance, in an Artificial Intelligence (AI) lecture, the essential information could contain keywords such as AI, Machine Learning (ML), supervised, unsupervised, modelling, etc. To extract the crucial words, the authors used LDA. LDA is a topic modelling method that uses a statistical approach to discover important topics by analyzing the document’s words [26].

    LDA is a Bayesian-based model that decomposes a document into a set of topics [26]. LDA uses a vector of random multinomial parameters (θ) over (n) documents. The distribution of θ is affected by the hyperparameters, α and the B matrix. The B matrix illustrates the relationship between the document’s discrete topic variables z_ij and words w_ij. Eq. (1) shows the LDA probability of a list of words (W) [26].
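    The body of Eq. (1) did not survive text extraction; under the definitions above, a reasonable reconstruction is the standard LDA marginal likelihood of [26], written here for a single document d with (m) words (the document index is suppressed in the subscripts):

    p(W \mid \alpha, B) = \int p(\theta \mid \alpha) \prod_{j=1}^{m} \sum_{z_j} p(z_j \mid \theta)\, p(w_j \mid z_j, B)\, d\theta \qquad (1)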

    The probability of a set of recognized words (W) depends on the values of α and the B matrix. Moreover, d is the index of a document with (m) words. Fig. 4 presents a graphical LDA model [26].

    Figure 4: Graphical representation of the proposed LDA model

    Fig. 5 presents the proposed LDA summarization model framework. The subtitle summarization process starts with acquiring a single subtitle file. The subtitle file is then preprocessed, i.e., punctuation is removed, letters are converted to lowercase, the document is split into sentences, stop words are removed, and an id2word dictionary and a corpus are created. The corpus, the id2word dictionary, and the number of topics (in our case, one topic) are passed to the LDA model to generate the topic’s keywords list. The keywords list contains the ten words considered the most important when extracting and selecting the summary’s sentences. Any sentence that includes one of the ten words is added to the output summary. Then, the generated summary is compared with the human outlines using the evaluation methods.
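    The following Python sketch illustrates the preprocessing and model-training steps of this framework using Gensim. The helper names, the regular expressions, and the use of NLTK stop words are assumptions for illustration, since the paper does not publish its code.

    import re
    from nltk.corpus import stopwords          # requires nltk.download("stopwords")
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel

    STOP_WORDS = set(stopwords.words("english"))

    def preprocess_subtitles(raw_text):
        # Split into sentences, lowercase, strip punctuation, and drop stop words.
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", raw_text) if s.strip()]
        tokenized = [[w for w in re.findall(r"[a-z]+", s.lower()) if w not in STOP_WORDS]
                     for s in sentences]
        return sentences, tokenized

    def train_lda(tokenized_sentences, num_topics=1):
        # Build the id2word dictionary and the term-frequency corpus, then fit LDA.
        id2word = Dictionary(tokenized_sentences)
        corpus = [id2word.doc2bow(tokens) for tokens in tokenized_sentences]
        lda = LdaModel(corpus=corpus, id2word=id2word, num_topics=num_topics,
                       random_state=0, passes=10)
        return lda, id2word, corpus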

    Figure 5: LDA-based subtitle summarization model framework

    This work developed a single-document summarization model. Each subtitle file is treated as a single document. Additionally, to identify the best number of topics, the authors used Grid Search, an optimization algorithm, together with the Scikit-learn LDA model [27]. The grid search returned a best estimator of one topic, which is reasonable, as a lecture usually describes a specific topic. Therefore, the summarization model accepts a single subtitle file and generates a single topic that describes the lecture’s contents.
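    The sketch below shows one way to run such a grid search over the number of topics with the Scikit-learn LDA model; the candidate topic counts and vectorizer settings are assumptions, as the paper does not list them.

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.model_selection import GridSearchCV

    def best_num_topics(sentences):
        # Treat each sentence as a document and search for the topic count
        # whose LDA model maximizes the built-in log-likelihood score.
        dtm = CountVectorizer(stop_words="english").fit_transform(sentences)
        search = GridSearchCV(LatentDirichletAllocation(random_state=0),
                              param_grid={"n_components": [1, 2, 3, 4, 5]})
        search.fit(dtm)
        return search.best_params_["n_components"]   # one topic in the reported experiments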

    The Scikit-learn LDA model does not link words to the generated output; it provides only topic statistics [27]. Thus, the authors implemented a Gensim-based LDA model. Gensim is a Python topic modelling library whose output is human-readable, as it lists the words of each generated topic [28]. One topic in Gensim contains ten words. Those words are used as keywords to extract the sentences that appear in the generated summary.
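    A minimal sketch of the keyword extraction and sentence selection steps follows, assuming the Gensim model from the previous sketch; the helper names are illustrative.

    def topic_keywords(lda, topic_id=0, topn=10):
        # The single Gensim topic supplies the ten-word keywords list.
        return [word for word, _prob in lda.show_topic(topic_id, topn=topn)]

    def select_sentences(sentences, keywords):
        # Keep every sentence that contains at least one keyword (case-insensitive).
        keyword_set = {k.lower() for k in keywords}
        return [s for s in sentences if keyword_set & set(s.lower().split())]

    # Example usage:
    # lda, id2word, corpus = train_lda(tokenized)
    # summary = " ".join(select_sentences(sentences, topic_keywords(lda)))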

    3.3 Summaries’ Length Enhancement

    As LDA dynamically generates topics that differ from one run to another, the output summary’s length varies and cannot be controlled. Moreover, LDA tends to generate lengthy summaries. Thus, the authors implemented a method to reduce the number of selected sentences by reducing the keywords list. The keywords list contains the ten words that the Gensim LDA model generated. To minimize the keywords list, non-noun words, e.g., verbs, number words, etc., were removed.
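    A minimal sketch of this keyword filtering, assuming NLTK’s part-of-speech tagger; the paper does not name the tagging tool it used.

    import nltk   # requires nltk.download("averaged_perceptron_tagger")

    def keep_nouns(keywords):
        # Keep only keywords tagged as nouns (NN, NNS, NNP, NNPS); the reduced
        # list is then passed to the same sentence-selection step as before.
        return [word for word, tag in nltk.pos_tag(keywords) if tag.startswith("NN")]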

    3.4 Model Evaluation

    The LDA-based generated summaries are compared with the manually generated outlines. Moreover, TF-IDF and LSA models were implemented to compare their performance with the proposed LDA model on the same subtitle files. TF-IDF determines the relevant words in a document by calculating the term frequency and inverse document frequency score for each word [29]. The sentence importance score is calculated based on the scores of the terms in that sentence. In contrast, LSA is a method that extracts the meanings and semantics of words in a document [30]. A Sumy-based LSA model was used. Sumy is a Python package for extractive text summarization with the flexible feature of specifying the number of generated sentences [31].
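    A minimal sketch of the Sumy-based LSA baseline is given below; Sumy lets the caller fix the number of output sentences, which is how the LSA summaries were matched in length to the human summaries. The TF-IDF baseline is a custom sentence-scoring routine and is not reproduced here.

    from sumy.parsers.plaintext import PlaintextParser
    from sumy.nlp.tokenizers import Tokenizer
    from sumy.summarizers.lsa import LsaSummarizer

    def lsa_summary(subtitle_text, num_sentences):
        # Parse the plain-text subtitles and return exactly `num_sentences`
        # extracted sentences joined into a summary.
        parser = PlaintextParser.from_string(subtitle_text, Tokenizer("english"))
        summarizer = LsaSummarizer()
        return " ".join(str(s) for s in summarizer(parser.document, num_sentences))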

    ROUGE and human evaluation were used to evaluate the LDA-based generated summaries. ROUGE assesses the generated summaries by comparing them with the human-generated summaries. ROUGE includes several measurements, such as ROUGE-1, ROUGE-2, and ROUGE-L. For instance, ROUGE-1 measures the overlap of unigrams, while ROUGE-2 measures the overlap of bigrams. Additionally, ROUGE-L measures the longest common subsequence. However, ROUGE mainly focuses on comparing word sequences; therefore, human assessment is needed to evaluate the sentences that exist in both the generated summary and the human summary. The human evaluation is computed as the number of sentences in the generated summary that match the human outline, divided by the total number of sentences in the generated summary.
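    The sketch below illustrates both evaluation steps; the rouge-score package is used here as one possible ROUGE implementation, since the paper does not name the one it used.

    from rouge_score import rouge_scorer

    def rouge_scores(reference_summary, generated_summary):
        # Precision, recall, and F-measure for ROUGE-1, ROUGE-2, and ROUGE-L.
        scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
        return scorer.score(reference_summary, generated_summary)

    def human_precision(generated_sentences, human_sentences):
        # Fraction of generated sentences that also appear in the human summary.
        human_set = set(human_sentences)
        matched = sum(1 for s in generated_sentences if s in human_set)
        return matched / len(generated_sentences) if generated_sentences else 0.0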

    4 Results

    This section presents the experimental results of the subtitle summarization. The summarization process consisted of three phases: (i) the dataset expansion phase, (ii) the LDA-based subtitle summarization phase, and (iii) the summaries’ length enhancement phase. The following sections illustrate the results of each phase.

    4.1 Dataset Expansion Phase

    As mentioned in Section 3.1, the subtitle files needed manual preprocessing before being summarized. Therefore, the subtitle files were manually preprocessed by removing time ranges and merging sentences, as shown in Fig. 6. On the other hand, the authors generated manual summaries for evaluation purposes. As a result, the current expanded version of the EDUVSUM dataset includes 26 processed subtitle files and 26 manual summaries along with the original videos and subtitle files.

    Figure 6: Sample of a subtitle file portion in the EDUVSUM dataset and the preprocessed version

    4.2 LDA-Based Subtitles Summarization Phase

    After obtaining the processed subtitle files in the dataset expansion phase, the subtitle files are ready to be summarized. To summarize the subtitles using the proposed LDA approach, the authors implemented a Python-based script. The summarization phase starts with obtaining a subtitle file as input. The subtitle file is then preprocessed for summarization by lowercasing all letters and removing punctuation. Additionally, to prepare the data for LDA, the authors split the subtitle file into sentences, and each sentence is divided into its words, creating a list of the sentences’ words. After that, an id2word dictionary is created that contains an identification number for each word. Then, a corpus of sentences is generated. The corpus represents the term frequency of each word in a sentence using the word’s id. The Gensim LDA model is then built by passing the corpus, the id2word dictionary, and the number of topics, i.e., based on Section 3.2, num_topics = 1. The Gensim-based LDA represents the topic by outputting a list of 10 keywords, see Fig. 7 (note that the word cloud was used for visualization purposes only). The authors extracted the ten keywords and used them to select the sentences included in the summary, as shown in Fig. 8.

    After developing subtitle summaries with the LDA model and comparing the LDA performance with TF-IDF and LSA (see Fig. 9), Tab. 3 describes the ROUGE and manual evaluation results. Tab. 3 shows that the LDA-based summaries recorded the highest ROUGE scores and human evaluation as well. Moreover, the number of sentences in the generated summaries is critical in the evaluation process. Therefore, Tab. 4 shows the average number of sentences in the TF-IDF, LSA, and LDA summaries compared with the human-generated summaries and the mean number of sentences in the original subtitle files. The LSA-based summaries contained exactly the same number of sentences as the human-generated summaries due to the LSA model’s flexibility in controlling the number of sentences in the output summaries. In comparison, TF-IDF produced the shortest outlines. However, the LDA-based summaries included the highest number of sentences.

    Figure 7: Sample of the LDA-based subtitles summarization steps

    4.3 Summaries’ Length Enhancement Phase

    To shorten the lengthy LDA-based summaries, the authors excluded non-noun words from the keywords list, as explained in Section 3.3 and shown in Figs. 10 and 11. Tab. 5 compares the ROUGE scores and human evaluation of the LDA summaries before and after applying the length enhancement method. The precision rate increased in the enhanced LDA summaries as the summaries’ length decreased, as shown in Tab. 6.

    Figure 8: Sample of the LDA-based subtitles summarization output

    5 Discussion

    Based on our results, the performance of LDA-based subtitle summarization surpassed the existing TF-IDF and LSA models. As LDA generated a keywords list for the lecture’s topic, the LDA-based summarization model focused on the sentences that contain the lecture’s important words. However, the LDA summaries were the longest in terms of sentences. A large number of sentences in the generated summaries could affect the ROUGE scores. Thus, the authors tried to enhance the summaries’ length by eliminating non-noun words from the LDA-generated keywords list. The length enhancement method improved the precision performance; as the number of non-relevant sentences decreases, the precision rate increases. However, the length enhancement approach based on nouns may not be suitable for topics that mainly involve numbers, Booleans, and verbs.

    On the other hand, controlling the length of the generated summaries in TF-IDF and LDA is challenging. In contrast, the flexibility of the LSA model resulted in summaries with a number of sentences equal to that of the human-generated summaries. Moreover, using TF-IDF to summarize educational content is insufficient because TF-IDF favours the less frequently appearing words and sentences, whereas in lectures, the most repeated words are considered important. Furthermore, the enhanced LDA summaries surpass the LSA-based summaries in terms of ROUGE scores and human evaluation. To sum up, LDA proved its effectiveness in summarizing educational subtitle files.

    Figure 9: Sample of the subtitles summaries evaluation

    Table 3: ROUGE scores and human evaluation of subtitles’ summarization

    Table 4: The average number of sentences in the generated summaries

    Figure 10: Sample of LDA keywords list reduction

    Figure 11: Sample of the length enhancement process

    Table 5: LDA subtitles’ summarization results with length enhancement

    Table 6: The average number of sentences in LDA-based summaries with length enhancement

    6 Concluding Remarks

    Learners spend a lot of their time watching long educational and lecture videos. Summarizing long videos in textual form can be effective. Thus, to increase learning effectiveness and reduce the learning time, the authors implemented an LDA-based subtitle summarization model. The subtitles of educational videos were summarized using an LDA-generated keywords list. Regarding the results, LDA recorded the highest performance compared with the LSA and TF-IDF summarization models. Furthermore, reducing the LDA summaries’ length by removing non-noun words from the keywords list improved the LDA precision rate and human evaluation.

    Students in any field can use the proposed work to generate lecture summaries. Moreover, the authors encourage interested researchers to consider applying document-based analysis to videos’ textual contents. The proposed model could be applied in other domains, such as news videos. Further, in the educational context, the type and contents of a lecture could affect the generated summaries, e.g., quiz- or test-related details are essential even though they are outside the lecture’s topic scope. On the other hand, because they are based on personal perspectives, the human-generated summaries in this work could differ from one annotator to another. Therefore, the model-based summarization performance could be affected.

    Nevertheless, controlling the length of the LDA-based generated summaries could be considered in the future. Moreover, handling punctuation and noun properties, e.g., singular and plural forms, could be a future improvement. Additionally, in the educational domain, repeated words could be important.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
