
    Incremental Learning Framework for Mining Big Data Stream

    Computers, Materials & Continua, 2022, Issue 5

    Alaa Eisa,Nora EL-Rashidy,Mohammad Dahman Alshehri,Hazem M.El-bakry and Samir Abdelrazek

    1Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt

    2Machine Learning and Information Retrieval Department, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr El-Sheikh, Egypt

    3Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, 21944, Saudi Arabia

    Abstract: Data stream classification now plays a key role in big data analytics owing to the enormous growth of streaming data. Most existing classification methods rely on ensemble learning, which is dependable, but they cope poorly with learning from imbalanced big data and assume that all data are pre-classified. Another weakness of current methods is their long evaluation time when the target data stream contains a large number of features. The main objective of this research is to develop a new incremental learning method based on the proposed ant lion fuzzy-generative adversarial network model. The proposed model is implemented in Spark architecture. For each data stream, the class output is computed at the slave nodes by training a generative adversarial network with the back-propagation error based on fuzzy bound computation. This method overcomes the limitations of existing methods, as it can classify data streams that are partially or completely unlabeled while providing high scalability and efficiency. The results show that the proposed model outperforms state-of-the-art methods in terms of accuracy (0.861), precision (0.9382), and minimal MSE (0.0416).

    Keywords: Ant lion optimization (ALO); big data stream; generative adversarial network (GAN); incremental learning; Rényi entropy

    1 Introduction

    In the advanced digital world, data streams are generated by diverse sources, such as sensors, social media networks, and Internet of Things (IoT) devices, and they are growing rapidly [1,2]. These streams are characterized by changes in data distribution and high velocity with respect to time. As such, much research has concentrated on the issues of data stream classification, especially for non-stationary data. The main issue in the classification process is handling the various concept drifts that arise as the data distribution changes over time in unforeseen ways [3–6].

    The data stream is a manifestation of big data characterized by five dimensions (5 V): value, variety, veracity, velocity, and volume. Data stream mining is the methodology used to analyze a huge volume of data samples arriving in an ordered sequence [7–13]. Incremental learning follows the machine learning paradigm in which the learning procedure takes place only when new examples appear, adjusting what was learned from previous examples. The ensemble learning model uses multiple base learners and integrates their predictions [14–19].

    The large volume of sequential data calls for special-purpose analysis techniques, as it is infeasible to keep the entire data stream in memory [20–23]. One approach to analyzing data streams is the incremental production of informative patterns. Such a pattern is a synthesized view of the data records analyzed so far and is progressively updated as new records become available. Online and incremental learning methods are used for dealing with rapidly arriving, continuous, unbounded, and time-varying data streams [24–26]. The online environment is non-stationary and copes with learning issues under big data conditions [27]. Moreover, learning frameworks constantly evolve to develop more effective feature expression and renewal models. The prediction model requires several parameters and can always be rebuilt [28,29].

    The main contributions of this paper can be summarized as follows:

    · The authors propose an incremental learning framework for big data streams using the ant lion fuzzy-generative adversarial network model (ALF-GAN), which provides speed, high efficiency, and good convergence, and avoids local optima.

    · The proposed model is carried out in Spark architecture, which provides high scalability [30] and in which a master node and slave nodes are considered [31].

    · The authors use a tri-model for the feature extraction process, which includes a token keyword set, a semantic keyword set, and a contextual keyword set.

    · The authors use Rényi entropy for feature selection, which decreases over-fitting, reduces training time, and improves accuracy.

    ·The proposed framework is compared to other state-of-the-art methods using standard criteria.

    ·The authors explore the limitations of the current literature on big data stream mining techniques.

    The paper is organized as follows: Section 2 reviews various data classification methods; Section 3 presents the materials and methods and elaborates the proposed ALF-GAN model; Section 4 presents the results and discussion; and Section 5 concludes the paper.

    2 Related Works

    Most of the existing data stream classification methods use ensemble learning owing to its flexibility in updating the classification scheme, for example by retraining, removing, or adding constituent classifiers [32–35]. Most of these methods are more dependable than single-classifier schemes, particularly in non-stationary environments [5,36]. The dynamic weighted majority (DWM) method maintains an ensemble of classifiers with a weighted majority vote model. DWM dynamically generates and alters the classifiers in response to concept drifts [37]. When a classifier misclassifies an instance, its weight is reduced by a set amount, discounting its contribution to the ensemble output. A classifier whose weight falls below a threshold is removed from the ensemble [38–40].

    Gupta et al. [41] introduced scale free-particle swarm optimization (SF-PSO) with a multi-class support vector machine (MC-SVM) for data classification. Features were selected to minimize time complexity and to enhance classification accuracy. The authors validated their approach on six high-dimensional datasets but did not exploit the particles' learning mechanism. Lara-Benítez et al. [42] introduced an asynchronous dual-pipeline deep learning framework for data streaming (ADLStream). Training and testing were performed simultaneously in different processes: the data stream was fed to both pipelines, quick predictions were offered concurrently, and the deep learning model was updated. This method reduced processing time and obtained high accuracy; unfortunately, it is unsuitable when the label of each instance is not immediately available. Casalino et al. [10] introduced dynamic incremental semi-supervised fuzzy c-means for data stream classification (DISSFCM). The authors assumed that some labeled data belonging to various classes arrive in chunks over time. A semi-supervised model processed each chunk, achieving effective cluster-based classification, and classification quality was increased by partitioning the data into clusters. However, they did not merge small clusters, which may hamper cluster quality in terms of interpretability and structure. Ghomeshi et al. [5] introduced an ensemble method for handling various concept drifts in data stream classification based on particle swarm optimization (PSO) and the replicator dynamics algorithm (RD). The method uses a three-layer architecture that generates classification types of varying size, each selecting a subset of the features in the target data stream. It takes a long evaluation time when the target data stream contains a high number of features.

    Yu et al. [29] introduced a combination-weight online sequence extreme learning machine (CWEOS-ELM) for data stream classification. The method was evaluated based on correlation and the changing test error. The original weight values were determined using the AdaBoost algorithm, and performance was forecast using adaptable weights over various base learners. It requires no user intervention, and the learning process is dynamic. The method was effective but failed to predict for an increasing number of feature-space dimensions. Gomes et al. [40] introduced an adaptive random forest (ARF) model for data stream classification. Decision trees are grown by training on re-sampled versions of the original data, with a few features selected randomly at each node. The model contains adaptive operators and a resampling scheme that cope with concept drifts across datasets; it was accurate and used a feasible amount of resources, but the authors did not analyze run-time performance when reducing the number of detectors. Dai et al. [43] introduced a distributed deep learning model for processing big data, offering distributed training on the data system's compute model; it faced large overheads due to the adaptation layer between different schemes. Sleeman et al. [44] introduced an ensemble-based model for extracting instance-level characteristics by analyzing each class. It can learn from large, skewed datasets with many classes, but it does not solve multi-class imbalance issues in big data, such as extreme class imbalance and class overlapping. Tab. 1 summarizes the studies undertaken for review.

    Table 1: Summary of the studies undertaken for review


    Some of the issues faced by the existing data classification methods are explained as follows:

    · The existing methods are not effective at learning from imbalanced big data, as they are designed for small datasets. Classification tasks can be made more scalable by combining the data with high-performance computing architecture [44–47].

    · In data stream classification, it is not pragmatic to suppose that all data are pre-classified. Therefore, there is a need to focus on classifying stream data that are partially or completely unlabeled [48].

    · The increasing scale of big data and the curse of dimensionality make data classification difficult [41,49].

    · Adapting to various concept drifts poses a great challenge for data stream classification when the data distribution evolves over time [50,51].

    3 Methods and Materials

    3.1 Soft Computing Techniques

    Soft computing (SC) techniques are an assemblage of intelligent, adjustable, and flexible problem-solving methods [52]. They are used to model complex real-world problems to achieve tractability, robustness, and low solution cost. The singular characteristic of all SC techniques is their capacity for self-tuning; that is, they infer the power of generalization by approximating and learning from experimental data. SC techniques can be divided into five categories: machine learning, neural networks, evolutionary computation, fuzzy logic, and probabilistic reasoning [53,54]. For further reading on SC, Sharma et al. provide a comprehensive review and analysis of supervised learning (SL) and SC techniques for stress diagnosis in humans, exploring the strengths and weaknesses of different SL (support vector machine, nearest neighbors, random forest, and Bayesian classifier) and SC (deep learning, fuzzy logic, and nature-inspired) methods [55].

    3.2 Spark Architecture Based Big Data Stream Approach

    This section describes the big data streaming approach using the proposed ALF-GAN in Spark architecture. Most classification processes require the complete dataset to be loaded into memory before starting [56]. For online data classification, where the dataset size is extremely large, scalability is needed, and this is supported by Spark architecture. Incremental learning is performed by treating the input data as big data. The proposed incremental learning process includes the phases of pre-processing, feature extraction, feature selection, and incremental learning, all carried out in Spark architecture. Let us consider n input data B passed to the n slave nodes to perform the pre-processing and feature extraction processes. From the slave nodes, the extracted features are fed to the master node for feature selection and incremental learning. Finally, the class output for each input of big data is computed at the slave nodes. Fig. 1 represents the schematic diagram of the proposed ALF-GAN model on Spark architecture.
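The slave/master division of labor described above can be illustrated without a Spark cluster. The sketch below is an analogy only: a thread pool stands in for the slave nodes and the driver plays the master node, and the task bodies are hypothetical placeholders rather than the paper's actual operators.

```python
from concurrent.futures import ThreadPoolExecutor

def slave_task(chunk):
    # slave nodes: pre-processing and feature extraction per record
    return [token.lower() for record in chunk for token in record.split()]

def master_task(feature_sets):
    # master node: feature selection over everything the slaves produced
    return sorted({feature for fs in feature_sets for feature in fs})

def run_pipeline(big_data, n_slaves=2):
    # partition the input B across the n slave nodes
    chunks = [big_data[i::n_slaves] for i in range(n_slaves)]
    with ThreadPoolExecutor(max_workers=n_slaves) as pool:
        feature_sets = list(pool.map(slave_task, chunks))
    return master_task(feature_sets)
```

In actual Spark the partitioning, shuffling, and fault tolerance are handled by the framework; the point here is only the data flow from slaves to master.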

    Figure 1:Schematic diagram of proposed ALF-GAN in spark architecture

    3.3 Pre-Processing of Input Big Data

    The input data considered to perform the learning process is defined as,

    where ? denotes the database, n indicates the total number of data items, and Bi represents the i-th input big data. The data pre-processing phase involves the following steps:

    1. Stop word removal: reduces the text count and increases the system's performance.

    2. Stemming: reduces variant word forms to a common representation termed the root.

    3. Tokenization: the exploration of words in a sentence; its key role is to identify meaningful words. The pre-processed data is represented as D.
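The three steps above can be sketched minimally. The stop-word list and the suffix-stripping stemmer below are toy stand-ins; the paper does not name the actual tools it uses.

```python
# Toy stand-in for a real stop-word list (assumption, not from the paper)
STOP_WORDS = {"the", "is", "a", "an", "of", "and", "to", "in", "are"}

def tokenize(text):
    # step 3: explore the words of the sentence
    return text.lower().split()

def stem(word):
    # step 2: crude suffix stripping toward a common root form
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    # step 1: remove stop words, then stem the surviving tokens
    return [stem(w) for w in tokenize(text) if w not in STOP_WORDS]
```

A production system would use a full stop-word corpus and a proper stemmer such as Porter's; the sketch only shows the order of the three operations.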

    3.4 Feature Extraction Using Tri-Model

    The pre-processed data D is passed to the feature extraction phase, where features are extracted using a tri-model. It is designed by considering the token keyword set, semantic keyword set, and contextual keyword set techniques [57,58].

    Token keyword set: it represents words with a definite meaning. A paragraph may contain up to six tokens in the keyword set.

    Semantic keyword set: in this phase, a word dictionary with two semantic relations is built. It is represented as,

    where d represents a keyword, and b and c are its synonym and hyponym words.

    Contextual keyword set: it identifies related words by eliminating words from irrelevant documents. It identifies the context terms and semantic meaning to form a relevant context. Key terms are the initial indicators, and the context terms serve as validators to determine whether the key terms are indicators. Let us consider the training data as D, a key term as Dkt, and a context term as Dct.

    Identification of key term: let us consider the language model as M such that for each term, the keyword measure is computed as,

    where Mrel is the relevant-document model and Mnon-rel represents the non-relevant-document model in the language model.

    Identification of context term: for each key term, the contextual terms need to be computed separately. The key term instances in both the relevant and irrelevant documents are computed, and a sliding window W is applied to extract the key term around D. The relevant terms are denoted as arel, whereas the non-relevant terms are specified as anon-rel. For each unique term, a score is computed using the equation below,

    where Ms(rel) denotes the language model for the set of relevant documents and Ms(non-rel) indicates the language model for the set of non-relevant documents. Thus, the features extracted using the tri-model are represented as f, which is given as the input to the feature selection phase.
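The paper's scoring equation is not reproduced in this text, so the sketch below uses a standard smoothed log-likelihood ratio between the relevant and non-relevant unigram language models as a stand-in; the function name and smoothing parameter are assumptions.

```python
import math

def term_score(term, rel_counts, nonrel_counts, alpha=1.0):
    # Laplace-smoothed unigram probabilities under each language model
    vocab = set(rel_counts) | set(nonrel_counts)
    p_rel = (rel_counts.get(term, 0) + alpha) / (
        sum(rel_counts.values()) + alpha * len(vocab))
    p_non = (nonrel_counts.get(term, 0) + alpha) / (
        sum(nonrel_counts.values()) + alpha * len(vocab))
    # positive score: the term indicates relevance; negative: non-relevance
    return math.log(p_rel / p_non)
```

For example, a term counted only in the relevant windows gets a positive score, while one seen only in non-relevant windows gets a negative score.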

    3.5 Feature Selection Using Renyi Entropy

    After extracting the features from big data, feature selection becomes essential, as it is complicated to mine and transform a huge volume of data into valuable insights [59]. The unique and important features are selected using the Rényi entropy measure. It is defined as the generalization of Shannon entropy that depends on the parameter r and is given as,

    The features selected using the entropy measure are represented as R with dimension [U × V].
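The Rényi entropy of order r, H_r(p) = (1/(1−r)) log Σ p_i^r, can be sketched directly. How the paper maps per-feature entropies to the selected set R is not fully specified, so the top-k ranking rule below is an assumption.

```python
import math

def renyi_entropy(probs, r=2.0):
    # Rényi entropy of order r (r != 1); the r -> 1 limit is Shannon entropy
    assert r != 1.0 and abs(sum(probs) - 1.0) < 1e-9
    return math.log(sum(p ** r for p in probs)) / (1.0 - r)

def select_features(feature_probs, k, r=2.0):
    # keep the k features whose value distributions carry the most entropy
    ranked = sorted(feature_probs,
                    key=lambda f: renyi_entropy(feature_probs[f], r),
                    reverse=True)
    return ranked[:k]
```

A uniform distribution maximizes the entropy, so a feature whose values are spread evenly ranks above one dominated by a single value.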

    3.6 Incremental Learning Using Ant Lion Fuzzy-Generative Adversarial Network

    Once the features are selected, incremental learning is accomplished using the proposed ALF-GAN model. The significance of incremental learning is that adding new samples does not require retraining on all samples, which reduces training time and memory consumption.

    The learning steps of the proposed ALF-GAN are presented in Fig. 2. Initially, the input data is considered as St and the new chunk of data as St+1. Both St and St+1 are used to train the GAN with the back-propagation error Qt, and the resulting output is declared as the predicted class. The error of the new chunk, Qt+1, is compared with the error of the initial data, Qt. If Qt+1 is less than Qt, the GAN trained at t is kept; otherwise, the fuzzy bound is computed based on the range modification degree (RMD). After computing the fuzzy bound, a new training process is carried out using the proposed ALF-GAN to obtain a new GAN.
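The branch just described can be written as a short control-flow sketch. The `error` and `fuzzy_retrain` callables are hypothetical placeholders for the paper's GAN error estimate and fuzzy-bounded ALF-GAN retraining, not real implementations.

```python
def incremental_step(gan, q_t, new_chunk, error, fuzzy_retrain):
    # Qt+1: error of the current GAN on the newly arrived chunk
    q_next = error(gan, new_chunk)
    if q_next < q_t:
        # the current GAN already fits the new chunk: keep it
        return gan, q_next
    # otherwise bound the weights via RMD and retrain to get a new GAN
    return fuzzy_retrain(gan, new_chunk), q_next
```

Each arriving chunk thus costs one error evaluation, and retraining happens only when the error worsens, which is what makes the scheme incremental.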

    Figure 2:The proposed incremental learning model

    3.6.1 Architecture of GAN

    The architecture of GAN is represented in Fig. 3. GAN [60] is an efficient network for learning a generative model from unlabeled data. The benefit of using GAN is its ability to generate accurate and sharp distributions, and it does not require any specific form of objective for training the generator network. GAN is composed of two models, the generator H and the discriminator C. Let us consider the input to H as the random noise A = {A1, A2, ..., Ak}. Thus, the output obtained from H is the synthetic samples H(A) = {H(A1), H(A2), ..., H(Ak)}. The input passed to C is either H(A) or R, the selected features obtained from the feature selection phase. The aim of C is to distinguish real from fake samples. The loss function is represented as,

    Figure 3:Architecture of generative adversarial network
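The standard GAN value function V(C, H) = E[log C(R)] + E[log(1 − C(H(A)))] can be sketched on raw discriminator scores. Whether the paper modifies this loss is not shown in the text, so the plain minimax form is assumed here.

```python
import math

def discriminator_loss(real_scores, fake_scores, eps=1e-12):
    # C is rewarded for scoring real samples near 1 and fakes near 0;
    # eps guards the logarithm against scores of exactly 0 or 1
    real_term = sum(math.log(s + eps) for s in real_scores) / len(real_scores)
    fake_term = sum(math.log(1.0 - s + eps) for s in fake_scores) / len(fake_scores)
    return -(real_term + fake_term)
```

A discriminator that separates real from fake confidently incurs a much lower loss than one that scores everything 0.5, which is the gradient signal the generator trains against.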

    3.6.2 Fuzzy Ant Lion Optimization Algorithm

    The algorithmic steps of the proposed ALF-GAN are explained as follows:

    i) Initialization: the weights are initialized randomly.

    ii) Error estimation: after computing the loss function of GAN, the error value Qt is measured from the ground truth value and the loss function of GAN using the equation below,

    iii) Fuzzy bound computation: when new data St+1 is applied to the network, the error Qt+1 is computed. If Qt+1 is greater than Qt, the fuzzy bounding model is used to bound the weights based on the RMD, which is given as,

    where Xt and Xt+1 are the weights estimated at t and t+1. The new bounded weight is computed as,

    where E denotes the fuzzy bound and Xt represents the weight vector, which is to be updated using ALO. The fuzzy bound is computed using the triangular membership function β,

    where ρ denotes the fuzzy bound threshold and β defines the triangular membership function with parameters a, g, h, and w.

    iv) Update weights: the ALO algorithm [61–63] is used to select the optimal weight.

    v) Termination: the steps are repeated until the best solution is obtained. Algorithm 1 presents the pseudo-code of the proposed ALF-GAN.

    Algorithm 1: Pseudocode of the proposed incremental learning algorithm
    1  Input: R, Tj
    2  Output: Xt
    3  Initialize the weights
    4  Estimate error (Qt) for instance St from Eq. (7)
    5  When a new instance St+1 is added, estimate error (Qt+1) using Eq. (7)
    6  While end criteria are not satisfied (Qt > Qt+1)
    7      Update weights using ALO and the fuzzy bound
    8  End while
    9  Return optimal weight
    10 Terminate
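The triangular membership function β used in the fuzzy bound of step iii) can be sketched as follows. The paper lists parameters a, g, h, and w without defining them in this text, so a standard triangle with feet a and h and peak g is assumed.

```python
def triangular(x, a, g, h):
    # standard triangular membership: 0 outside [a, h], 1 at the peak g
    if x <= a or x >= h:
        return 0.0
    if x <= g:
        return (x - a) / (g - a)   # rising edge from a up to the peak g
    return (h - x) / (h - g)       # falling edge from g down to h
```

Comparing β(x) against the threshold ρ would then decide how strongly a weight is bounded before the ALO update.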

    3.7 Dataset

    In this section, the authors discuss the datasets used in the implementation of the proposed framework:

    · WebKB dataset [64]: this dataset is collected from Mark Craven's website. It consists of web pages and hyperlinks from the computer science departments of the University of Texas, University of Wisconsin, University of Washington, and Cornell University.

    · 20 Newsgroup dataset [65]: a popular dataset for text experiments such as text clustering and text classification. It includes 20,000 newsgroup documents partitioned evenly into 20 different newsgroups.

    · Reuter dataset [66]: the total number of instances in the dataset is 21,578, with no missing values. The associated task is classification, and the attributes are ordered by category. The dataset's features are text.

    4 Results and Discussion

    The experiments carried out with the proposed ALF-GAN, and the results acquired to prove the effectiveness of the model, are presented in this section.

    4.1 Experimental Setup

    The proposed ALF-GAN is developed in Python using the WebKB, 20 Newsgroup, and Reuter datasets to conduct numerical experiments evaluating the effectiveness of the ALF-GAN model.

    4.2 Performance Metrics

    The performance of the proposed ALF-GAN is analyzed using the metrics of accuracy, MSE, and precision, which are shown in Tab. 2.

    Table 2:Evaluation metrics

    4.3 Comparative Methods

    The competitive methods used for analyzing the performance of the proposed model are scale free-particle swarm optimization (SF-PSO) [41], the asynchronous dual-pipeline deep learning framework for data streaming (ADLStream) [42], and dynamic incremental semi-supervised fuzzy c-means (DISSFCM) [10].

    4.4 Comparative Analysis

    This section presents the analysis showing the effectiveness of the proposed framework on three different datasets.

    4.4.1 Analysis Using 20 Newsgroup Dataset

    Tab. 3 and Fig. 4 depict the analysis of the proposed ALF-GAN on the 20 Newsgroup dataset. The accuracy results are shown in Fig. 4a. For a chunk size of 2, the accuracy of the conventional SF-PSO, ADLStream, and DISSFCM is 0.6389, 0.6683, and 0.6992, while the proposed ALF-GAN achieves 0.7068. For a chunk size of 3, the accuracy of SF-PSO, ADLStream, and DISSFCM is 0.7008, 0.7114, and 0.7224, whereas the proposed ALF-GAN obtains a higher accuracy of 0.72971. For a chunk size of 4, the accuracy of SF-PSO, ADLStream, and DISSFCM is 0.7638, 0.7646, and 0.772758, while the proposed ALF-GAN achieves 0.7774. For a chunk size of 5, the accuracy of SF-PSO, ADLStream, and DISSFCM is 0.8288, 0.8290, and 0.8311, while the proposed ALF-GAN achieves 0.8413.

    Table 3:Analysis using 20 newsgroup dataset

    The MSE results are portrayed in Fig. 4b. For a chunk size of 2, the MSE of the conventional SF-PSO, ADLStream, and DISSFCM is 0.6652, 0.5861, and 0.2202, while the proposed ALF-GAN achieves 0.1463. For a chunk size of 3, the MSE of SF-PSO, ADLStream, and DISSFCM is 0.3035, 0.1700, and 0.1164, while the proposed ALF-GAN achieves 0.1145. For a chunk size of 4, the MSE of SF-PSO, ADLStream, and DISSFCM is 0.1528, 0.1390, and 0.0848, while the proposed ALF-GAN achieves 0.0703. For a chunk size of 5, the MSE of SF-PSO, ADLStream, and DISSFCM is 0.0724, 0.0544, and 0.0479, while the proposed ALF-GAN achieves 0.0416.

    The precision results are depicted in Fig. 4c. For a chunk size of 2, the precision of the conventional SF-PSO, ADLStream, and DISSFCM is 0.76843, 0.7865, and 0.8065, while the proposed ALF-GAN achieves 0.8109. For a chunk size of 3, the precision of SF-PSO, ADLStream, and DISSFCM is 0.8165, 0.8229, and 0.8295, while the proposed ALF-GAN achieves 0.83419. For a chunk size of 4, the precision of SF-PSO, ADLStream, and DISSFCM is 0.86312, 0.8635, and 0.8682, while the proposed ALF-GAN achieves 0.88099. For a chunk size of 5, the precision of SF-PSO, ADLStream, and DISSFCM is 0.9053, 0.9055, and 0.9066, while the proposed ALF-GAN achieves 0.91225.

    4.4.2 Analysis Using Reuter Dataset

    Tab. 4 and Fig. 5 depict the analysis of the proposed ALF-GAN on the Reuter dataset. The accuracy results are shown in Fig. 5a. For a chunk size of 2, the accuracy of the conventional SF-PSO, ADLStream, and DISSFCM is 0.6929, 0.6961, and 0.7067, while the proposed ALF-GAN achieves 0.72193. For a chunk size of 3, the accuracy of SF-PSO, ADLStream, and DISSFCM is 0.7047, 0.7144, and 0.7291, while the proposed ALF-GAN achieves 0.7426. For a chunk size of 4, the accuracy of SF-PSO, ADLStream, and DISSFCM is 0.781107, 0.7817, and 0.7903, while the proposed ALF-GAN achieves 0.7907. For a chunk size of 5, the accuracy of SF-PSO, ADLStream, and DISSFCM is 0.8383, 0.8471, and 0.8489, while the proposed ALF-GAN achieves 0.8610.

    Figure 4:Analysis of ALF-GAN using 20 Newsgroup dataset,a)accuracy,b)MSE,c)precision

    Table 4:Analysis using reuter dataset


    The MSE results are portrayed in Fig. 5b. For a chunk size of 2, the MSE of the traditional SF-PSO, ADLStream, and DISSFCM is 0.609, 0.4359, and 0.3585, whereas the proposed ALF-GAN achieves 0.356. For a chunk size of 3, the MSE of SF-PSO, ADLStream, and DISSFCM is 0.3384, 0.2357, and 0.14539, while the proposed ALF-GAN achieves 0.1441. For a chunk size of 4, the MSE of SF-PSO, ADLStream, and DISSFCM is 0.2247, 0.13548, and 0.09774, while the proposed ALF-GAN achieves 0.0611. For a chunk size of 5, the MSE of SF-PSO, ADLStream, and DISSFCM is 0.0907, 0.07093, and 0.0668, whereas the proposed ALF-GAN achieves 0.0202.

    The precision results are depicted in Fig. 5c. For a chunk size of 2, the precision of the conventional SF-PSO, ADLStream, and DISSFCM is 0.8078, 0.8097, and 0.8169, while the proposed ALF-GAN achieves 0.82023. For a chunk size of 3, the precision of SF-PSO, ADLStream, and DISSFCM is 0.8214, 0.8276, and 0.8365, while the proposed ALF-GAN achieves 0.8449. For a chunk size of 4, the precision of SF-PSO, ADLStream, and DISSFCM is 0.87453, 0.8749, and 0.8799, while the proposed ALF-GAN achieves 0.8900. For a chunk size of 5, the precision of SF-PSO, ADLStream, and DISSFCM is 0.9111, 0.91603, and 0.91703, while the proposed ALF-GAN achieves 0.9382.

    4.4.3 Analysis Using WebKB Dataset

    Tab. 5 and Fig. 6 depict the analysis of the proposed ALF-GAN on the WebKB dataset. The accuracy results are shown in Fig. 6a. For a chunk size of 2, the accuracy of the conventional SF-PSO, ADLStream, and DISSFCM is 0.48139, 0.4948, and 0.6687, while the proposed ALF-GAN achieves 0.7442. For a chunk size of 3, the accuracy of SF-PSO, ADLStream, and DISSFCM is 0.48039, 0.5065, and 0.6917, while the proposed ALF-GAN achieves 0.75150. For a chunk size of 4, the accuracy of SF-PSO, ADLStream, and DISSFCM is 0.4832, 0.49754, and 0.6293, while the proposed ALF-GAN achieves 0.75038. For a chunk size of 5, the accuracy of SF-PSO, ADLStream, and DISSFCM is 0.4857, 0.5081, and 0.70018, while the proposed ALF-GAN achieves 0.7625.

    Figure 5:Analysis of ALF-GAN using Reuter dataset,a)accuracy,b)MSE,c)precision

    Table 5:Analysis using WebKB dataset


    Figure 6:Analysis of ALF-GAN using WebKB dataset,a)accuracy,b)MSE,c)precision

    The MSE results are portrayed in Fig. 6b. For a chunk size of 2, the MSE of the conventional SF-PSO, ADLStream, and DISSFCM is 0.3404, 0.23806, and 0.2086, while the proposed ALF-GAN achieves 0.2018. For a chunk size of 3, the MSE of SF-PSO, ADLStream, and DISSFCM is 0.2531, 0.2505, and 0.1422, whereas the proposed ALF-GAN achieves 0.1146. For a chunk size of 4, the MSE of SF-PSO, ADLStream, and DISSFCM is 0.3158, 0.2566, and 0.2211, while the proposed ALF-GAN achieves 0.1046. For a chunk size of 5, the MSE of SF-PSO, ADLStream, and DISSFCM is 0.3108, 0.2361, and 0.1323, while the proposed ALF-GAN achieves 0.0835.

    The precision results are depicted in Fig. 6c. For a chunk size of 2, the precision of the conventional SF-PSO, ADLStream, and DISSFCM is 0.4832, 0.4844, and 0.4914, while the proposed ALF-GAN achieves 0.49754. For a chunk size of 3, the precision of SF-PSO, ADLStream, and DISSFCM is 0.48139, 0.4858, and 0.49488, whereas the proposed ALF-GAN achieves 0.49775. For a chunk size of 4, the precision of SF-PSO, ADLStream, and DISSFCM is 0.4857, 0.4865, and 0.4946, while the proposed ALF-GAN achieves 0.5081. For a chunk size of 5, the precision of SF-PSO, ADLStream, and DISSFCM is 0.4803, 0.4828, and 0.5065, while the proposed ALF-GAN achieves 0.51388.

    These results show the superior performance of the proposed framework, which achieves higher accuracy and precision than the comparative methods with lower MSE.

    4.5 Comparative Discussion

    Tab. 6 presents a comparative discussion of the proposed ALF-GAN model. The table clearly shows that the proposed ALF-GAN model obtains its best performance on the Reuter dataset for the metrics of accuracy, MSE, and precision. For a chunk size of 5, the accuracy of the conventional SF-PSO, ADLStream, and DISSFCM is 0.8383, 0.8471, and 0.8489, while the proposed ALF-GAN achieves 0.8610; the MSE is 0.0907, 0.07093, and 0.0668, while the proposed ALF-GAN achieves 0.0202; and the precision is 0.91119, 0.91603, and 0.91703, while the proposed ALF-GAN achieves 0.9382.

    Table 6:Comparative discussion

    5 Conclusion

    This paper provides a framework for big data stream classification in Spark architecture using ALF-GAN. The proposed model achieves many merits, such as scalability through the use of Spark architecture. The incremental learning model provides high accuracy and can deal with the rapid arrival of continuous data. It uses a GAN that can classify data streams that are partially or completely unlabeled. Rényi entropy is used to select features, which decreases over-fitting, reduces training time, and improves accuracy. The ALO algorithm provides speed, high efficiency, and good convergence, and avoids local optima. The results showed that the proposed ALF-GAN obtained a maximal accuracy of 0.8610, a precision of 0.9382, and a minimal MSE of 0.0416. Future work will enhance classification performance by considering other optimization methods.

    Funding Statement: Taif University Researchers Supporting Project Number (TURSP-2020/126), Taif University, Taif, Saudi Arabia.

    Conflicts of Interest: The authors declare that there is no conflict of interest regarding the publication of the paper.
