
    Enhancing Collaborative and Geometric Multi-Kernel Learning Using Deep Neural Network

Computers, Materials & Continua, 2022, Issue 9

Bareera Zafar, Syed Abbas Zilqurnain Naqvi, Muhammad Ahsan, Allah Ditta, Ummul Baneen and Muhammad Adnan Khan

1Department of Mechatronics and Control Engineering, University of Engineering and Technology, Lahore, 54890, Pakistan

2Department of Information Sciences, Division of Science and Technology, University of Education, Lahore, 54000, Pakistan

3Riphah School of Computing & Innovation, Faculty of Computing, Riphah International University, Lahore Campus, Lahore, 54000, Pakistan

4Department of Software, Gachon University, Seongnam, 13120, Korea

Abstract: This research proposes a method called enhanced collaborative and geometric multi-kernel learning (E-CGMKL) that enhances the CGMKL algorithm, which deals with multi-class classification problems over non-linear data distributions. CGMKL combines multiple kernel learning with the softmax function using the framework of multiple empirical kernel learning (MEKL), in which empirical kernel mapping (EKM) provides explicit feature construction in the high-dimensional kernel space. CGMKL ensures consistent outputs of samples across kernel spaces and minimizes the within-class distance to highlight geometric features of multiple classes. However, the kernels constructed by CGMKL do not have any explicit relationship among them and construct high-dimensional feature representations independently of each other. This can be disadvantageous for learning on datasets with complex hidden structures. To overcome this limitation, E-CGMKL constructs kernel spaces from the hidden layers of a trained deep neural network (DNN). Due to the nature of the DNN architecture, these kernel spaces not only provide multiple feature representations but also inherit the compositional hierarchy of the hidden layers, which can benefit the predictive performance of the CGMKL algorithm on complex data with natural hierarchical structures, for example, image data. Furthermore, the proposed scheme handles image data by constructing kernel spaces from a convolutional neural network (CNN). Considering the effectiveness of the CNN architecture on image data, these kernel spaces provide a major advantage over the CGMKL algorithm, which does not exploit the CNN architecture for constructing kernel spaces from image data. Additionally, the outputs of the hidden layers directly provide features for the kernel spaces and, unlike CGMKL, do not require an approximate MEKL framework. E-CGMKL combines the consistency and geometry-preserving aspects of CGMKL with the compositional hierarchy of kernel spaces extracted from DNN hidden layers to significantly enhance the predictive performance of CGMKL. The experimental results on various data sets demonstrate the superior performance of the E-CGMKL algorithm compared to other competing methods, including the benchmark CGMKL.

Keywords: CGMKL; multi-class classification; deep neural network; multiple kernel learning; hierarchical kernel spaces

    1 Introduction

Machine learning has become essential in every sector of life. Machine learning methodologies for binary classification, such as decision trees, support vector machines, k-nearest neighbors, neural networks, and Naïve Bayes, can be extended to multi-class classification [1]. The most common traditional techniques for multi-class classification are one-vs.-one (OVO) and one-vs.-all (OVA). In OVA, a single classifier is designed per class, i.e., if there are K classes then K classifiers are constructed. When data is given as input to the classifiers, the classifier belonging to a particular class outputs the highest probability value for that class [2]. However, an imbalance problem exists in OVA because the samples of a single class are far fewer than the samples of the remaining classes combined [3]. The OVO approach divides the problem into K(K-1)/2 binary classifiers or sub-problems, and the outputs of these base classifiers are combined to obtain the final output [2]. Results show that OVO performs better than OVA [4], but the OVO method suffers from the non-competence problem [5]. In both methods, OVO and OVA, computational inefficiency arises from the formation of several binary classifiers. To overcome the limitations mentioned above, the softmax function is used, which requires a much simpler parameter learning strategy and optimizes a single log-likelihood function during the training phase. The outcome of a softmax function is a multinomial probability distribution over k possible classes, and the class with the highest probability is the predicted class [6]. For a given test sample, softmax directly outputs the probability of that sample belonging to each class, hence avoiding the need to develop multiple binary classifiers.
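
To make this concrete, here is a minimal NumPy sketch of the softmax prediction rule described above; the logits shown are hypothetical:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - np.max(z, axis=-1, keepdims=True)  # shift logits for stability
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

# Hypothetical logits for one sample over K = 3 classes.
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)              # approx. [0.659, 0.242, 0.099]
predicted_class = np.argmax(probs)   # class with the highest probability
```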

Although the softmax function deals well with multi-class classification problems when the data distributions have linear decision boundaries, its performance degrades on non-linear data sets. This problem is addressed by employing kernel methods that map the original data points to a higher-dimensional feature space to make the classification task easier. Some kernel methods construct implicit feature representations in the higher-dimensional space through the use of a kernel function, but this approach does not apply to the softmax function. To combine kernel methods with the softmax function, empirical kernel mapping (EKM) is used [7]. EKM maps each sample x into an explicit feature vector φe(x) in the high-dimensional kernel space. Once φe(x) is computed, it can easily be inserted into the softmax function. Reference [8] introduces the collaborative and geometric multi-kernel learning (CGMKL) algorithm, which effectively incorporates the multiple empirical kernel learning (MEKL) framework into the softmax function. CGMKL enhances the expression of sample data through the empirical feature space φe and enriches the classification ability of the softmax function. Additionally, CGMKL makes the softmax function work collaboratively among different kernel spaces and improves classification in all kernel spaces by controlling the output trends of input samples. To accomplish these tasks, CGMKL employs two regularization terms: RT_Un and RT_Sn. The term RT_Un helps the softmax function work collaboratively in different kernel spaces; it harmonizes the information between different kernel spaces and provides consistent outputs of samples in them. The term RT_Sn reduces the within-class distance and improves the classification capability of all kernel spaces. However, the kernels constructed by CGMKL do not have any explicit relationship among them, and hence construct high-dimensional feature representations independently of each other. This can be disadvantageous for learning on datasets with complex hidden structures. To address this issue, one possible solution is to construct kernel spaces from the hidden layers of a trained deep neural network (DNN). Due to the nature of the DNN architecture, these kernel spaces not only provide multiple feature representations but also inherit the compositional hierarchy of the hidden layers, which can improve the predictive performance of the CGMKL algorithm on complex data.
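
To make the explicit mapping concrete, the following sketch implements one standard form of empirical kernel mapping, φe(x) = Λ^(-1/2) Q^T kx, where K = Q Λ Q^T is the eigendecomposition of the training Gram matrix and kx collects the kernel evaluations of x against the training samples. The RBF width and the data are illustrative, and this is a generic EKM sketch rather than the exact variant of [7,8]:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def empirical_kernel_map(X_train, gamma=0.5, tol=1e-10):
    """Return a function mapping samples to explicit EKM feature vectors."""
    K = rbf(X_train, X_train, gamma)            # N x N training Gram matrix
    w, Q = np.linalg.eigh(K)                    # eigendecomposition K = Q diag(w) Q^T
    keep = w > tol                              # drop near-zero eigenvalues
    P = Q[:, keep] / np.sqrt(w[keep])           # N x r matrix realizing Lambda^{-1/2} Q^T
    return lambda X: rbf(X, X_train, gamma) @ P # phi_e(x) = Lambda^{-1/2} Q^T k_x

X_train = np.random.randn(20, 4)                # illustrative data
phi = empirical_kernel_map(X_train)
features = phi(X_train)                         # explicit vectors usable inside softmax
```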

Deep learning is a subset of machine learning that employs deep neural network architectures in which multiple hidden layers learn progressively more abstract representations of the data as we move from the shallow to the deeper layers. Deep learning methodologies surpass traditional machine learning algorithms especially when the data becomes huge or the problem becomes complex, for example, in image classification, natural language processing (NLP), and speech recognition [9]. Usually, in deep neural networks, only the last hidden layer is used to produce the final output, and the intermediate representations of the hidden layers serve to prepare a more abstract representation for the output layer. In [10], the kerNET method is proposed and applied to two renowned architectures, i.e., the multi-layer perceptron (MLP) and the convolutional neural network (CNN). Motivated by [10], our paper likewise exploits the intermediate representations extracted from the hidden layers of a trained DNN. The output of the n-th hidden layer, φn, is used as a feature vector to construct the base kernel K_n. The multiple kernel spaces constructed from these hidden layers inherit the compositional hierarchy of the hidden layers. This hierarchical structure can help improve the predictive performance of the classification algorithm on data sets with complex hidden structures.
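
A minimal Keras sketch of this extraction step follows; the layer sizes, layer names, and data are hypothetical, and the linear kernel on φ1 is just one possible way to build a base kernel from the extracted features:

```python
import numpy as np
from tensorflow import keras

# Hypothetical trained MLP; the layer sizes are illustrative.
model = keras.Sequential([
    keras.layers.Input(shape=(16,)),
    keras.layers.Dense(50, activation="relu", name="hidden1"),
    keras.layers.Dense(25, activation="relu", name="hidden2"),
    keras.layers.Dense(3, activation="softmax"),
])
# ... model.compile(...) and model.fit(...) would run here ...

def hidden_features(model, X, layer_name):
    """Output phi_n(x) of a named hidden layer of the trained network."""
    extractor = keras.Model(model.input, model.get_layer(layer_name).output)
    return extractor.predict(X, verbose=0)

X = np.random.randn(8, 16).astype("float32")
phi1 = hidden_features(model, X, "hidden1")  # features for base kernel K_1
K1 = phi1 @ phi1.T                           # e.g., a linear kernel on phi_1
```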

This paper proposes an enhanced collaborative and geometric multi-kernel learning (E-CGMKL) algorithm that enhances the CGMKL algorithm and deals with complex multi-class classification problems on non-linear data sets. Below we list the advantages of our proposed method over CGMKL:

(a) The kernels constructed by CGMKL do not have any explicit relationship among them and construct high-dimensional feature representations independently of each other. This can be disadvantageous for learning on datasets with complex hidden structures. To overcome this limitation, E-CGMKL constructs kernel spaces from the hidden layers of a trained deep neural network (DNN). Due to the nature of the DNN architecture, these kernel spaces not only provide multiple feature representations but also inherit the compositional hierarchy of the hidden layers, which can benefit the predictive performance of the CGMKL algorithm on complex data with natural hierarchical structures, for example, image data.

(b) Furthermore, our proposed scheme handles image data by constructing kernel spaces from a convolutional neural network (CNN). Considering the effectiveness of the CNN architecture on image data, these kernel spaces provide a major advantage over the CGMKL algorithm, which does not exploit the CNN architecture for constructing kernel spaces from image data.

(c) Additionally, the outputs of the hidden layers directly provide features for the kernel spaces and, unlike CGMKL, do not require an approximate MEKL framework.

(d) E-CGMKL combines the consistency and geometry-preserving aspects of CGMKL with the compositional hierarchy of kernel spaces extracted from DNN hidden layers to significantly enhance the predictive performance of CGMKL.

This paper is organized as follows. Section 2 briefly describes existing work related to our research. Section 3 provides a detailed description of our proposed E-CGMKL algorithm. Section 4 presents and discusses the experimental results. In the last section, conclusions are provided.

    2 Related Work

The kernel method is a remarkable technique that converts non-linearly patterned data into a linear pattern in a high-dimensional reproducing kernel Hilbert space (RKHS) [11].

    2.1 Kernel Methods

Kernel methods introduced in [12,13], such as support vector machines (SVM), the Gaussian kernel, kernel principal component analysis (PCA), and kernel Fisher discriminant analysis (KFDA), are well-established machine learning methods. These kernel methods use a kernel function K : X × X → R to implicitly map non-separable input data to a possibly high-dimensional feature space by defining the inner product in that space: K(x, x') = <φ(x), φ(x')>. Fig. 1 shows the kernel-function mapping of non-separable input data to a separable high-dimensional kernel space.

Figure 1: Graphical illustration of kernel mapping
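
The defining identity K(x, x') = <φ(x), φ(x')> can be verified directly for a kernel whose feature map is small enough to write out; the sketch below does so for the homogeneous degree-2 polynomial kernel on 2-D inputs (the vectors are arbitrary examples):

```python
import numpy as np

def phi(x):
    """Explicit degree-2 polynomial feature map for 2-D input."""
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2])

def k_poly(x, y):
    """Homogeneous polynomial kernel of degree 2: k(x, y) = (x . y)^2."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])
# The kernel value equals the inner product in the explicit feature space.
assert np.isclose(k_poly(x, y), phi(x) @ phi(y))
```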

    2.2 Multiple Kernel Learning

The success of a kernel-based learning algorithm depends on appropriate kernel selection. The choice of kernel is a fundamental problem of kernel methods. Various evaluation measures of kernel functions for model selection have been proposed, such as kernel polarization [14], cross-validation, kernel alignment [15], and spectral analysis [16,17]. Yet these mechanisms cannot guarantee an optimal selection of kernel functions for good performance of kernel-based classifiers. To address this problem, a popular technique called multiple kernel learning (MKL) has attracted the attention of many researchers. MKL combines a set of base kernels constructed using different kernel functions. The optimal combination of base kernels can automatically learn the importance of each feature [18-20]. Different algorithms have been proposed that try to improve the learning efficiency of MKL by exploiting various optimization techniques: Simple-MKL [21], EasyMKL [22], and stochastic variance reduced gradient SVRG-MKL [23]. To improve regular MKL, extended MKL techniques have been proposed, e.g., localized MKL (LMKL) with a novel sample-wise alternating optimization for training [24]; sample-adaptive MKL, which adaptively switches base kernels for each sample [25]; Bayesian maximum-margin MKL, which improves learning speed and generalization ability by defining a multiclass likelihood function that accounts for margin loss in kernelized classification [26]; and multiple empirical kernel learning (MEKL), which explicitly represents the samples by mapping the input space to an explicit feature space [27].

    2.3 Deep Learning and Conventional Machine Learning

Some traditional machine learning algorithms used for multi-class classification are random forest, support vector machine one-vs.-one SVM (OVO), support vector machine one-vs.-all SVM (OVA), and BPNN. Random forest is an ensemble method introduced by Breiman in 2001 [28]; it involves a group of tree-structured predictors in the learning process. This approach is used effectively in [29] for diabetes classification, but it builds many trees during learning, which makes it computationally expensive, and it is also not known to perform well on image and audio data. SVM (OVO) and SVM (OVA) are the most common classification techniques: SVM (OVO) divides the problem into K(K-1)/2 binary classifiers, while SVM (OVA) uses a single classifier per class. Reference [30] implements the OVO and OVA approaches to extract discriminative features and to predict multiclass motor-imagery features, respectively. Both methods suffer from computational inefficiency because of the formation of several binary classifiers. The back-propagation neural network (BPNN) is a supervised learning algorithm used for classification. In [31], a BPNN is designed to predict silicon oxide content during chemical analysis, estimated from the main oxides of rock. In BPNN, only the last-layer representation gives the classification result, yet the intermediate representations could also be used for the main classification task.

In deep kernel learning, there are three current research directions. The first is to form a synergy model in which a deep module is used at the front end and a kernel machine at the back end. Work of this type appears in [32], which presented a hybrid system in which a CNN identifies generic objects and the features learned by the CNN are used to train a Gaussian-kernel SVM; similar work can be found in [33]. In the second direction, the kernel method is fitted into deep architectures. Reference [34] introduces the convolutional kernel network (CKN), which bridges the kernel and neural-network literatures. Kernels in CKN produce representations of the images in a sequence built in a multilayer fashion, and these layers are called image feature maps. Similar work has been done by Reza et al. in [35]. The third direction combines deep kernel learning with optimization. Reference [36] introduces scalable deep kernel learning, which combines the structural properties of deep neural networks with the non-parametric flexibility of kernel methods by presenting scalable probabilistic Gaussian processes. Semi-supervised deep kernel learning [37] is an extension of this work [11].

In the context of complex feature learning using deep nets, [38] recently proposed the TBE-Net algorithm, which provides complex feature learning and leads to more integral and diverse vehicle features. Recently, [39] proposed the RSOD algorithm for small-object detection, which smartly exploits the feature maps generated by the shallow and deep layers and employs an attention mechanism to improve detection accuracy. In [10], Lauriola et al. proposed a deep learning framework named kerNET. This framework combines the hidden-layer representations optimally through the multiple kernel learning (MKL) framework, and this combination improves the quality and performance of the final representation. Each hidden-layer output provides a feature vector φ(x) corresponding to each input sample x, which is used to build the base kernels K_l. These base kernels are then combined using the MKL framework, i.e., K = Σ_l μl Kl. Fig. 2 illustrates this process, and a minimal sketch of the kernel combination follows the figure. In [8], Wang et al. introduce the CGMKL algorithm for multiclass classification, which ensures consistent outputs of samples across multiple kernel spaces and minimizes the within-class distance to highlight the clustering of different classes, but it does not use structurally related kernel spaces. In this paper, a hybrid mechanism is proposed that combines the kerNET framework with CGMKL. We exploit the intermediate representations extracted from the hidden layers of a trained DNN to construct multiple kernel spaces that inherit the compositional hierarchy of the hidden layers. These kernel spaces are then used in the CGMKL method to enhance its predictive performance.

Figure 2: Neural network architecture showing the inner representation φn of the input samples at the n-th hidden layer and the arranged sequence of non-linear transformations μ1, ..., μn
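
The weighted combination K = Σ_l μl Kl is straightforward to compute once the base Gram matrices are available; in this sketch the weights are fixed for illustration, whereas kerNET learns them with EasyMKL:

```python
import numpy as np

def combine_kernels(kernels, mu):
    """Weighted sum of base Gram matrices: K = sum_l mu_l * K_l."""
    mu = np.asarray(mu, dtype=float)
    mu = mu / mu.sum()                       # normalize the weights (a common convention)
    return sum(m * K for m, K in zip(mu, kernels))

# Two illustrative base kernels on the same 5 samples.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((5, 3)), rng.standard_normal((5, 7))
K1, K2 = A @ A.T, B @ B.T                    # linear kernels from two feature spaces
K = combine_kernels([K1, K2], mu=[0.3, 0.7]) # combined 5 x 5 Gram matrix
```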

    3 Methodology

The output of a deep neural network depends on a compositional sequence of non-linear mappings that conveys progressively more complex representations of the input data. In this paper, we use the intermediate representations extracted from the hidden layers of a DNN. The mathematical model of these intermediate representations is the composite function φn(x) given by:

φn(x) = μn(φn-1(x)) = μn(μn-1(· · · μ1(x)))    (1)

where μn(z) = g(Wn z + bn) represents the non-linear transformation from the (n-1)-th to the n-th hidden layer, g is the activation function, and φ0(x) = x is the input sample, as illustrated in Fig. 2.
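
A direct transcription of Eq. (1) in NumPy, with hypothetical weights and a ReLU activation g, might look as follows:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def phi(x, weights, biases, n):
    """Composite representation phi_n(x) = mu_n(mu_{n-1}(... mu_1(x)))."""
    h = x                                    # phi_0 = x
    for W, b in zip(weights[:n], biases[:n]):
        h = relu(W @ h + b)                  # mu_k(z) = g(W_k z + b_k)
    return h

# Illustrative weights for a 4 -> 5 -> 3 network.
rng = np.random.default_rng(1)
Ws = [rng.standard_normal((5, 4)), rng.standard_normal((3, 5))]
bs = [np.zeros(5), np.zeros(3)]
x = rng.standard_normal(4)
print(phi(x, Ws, bs, 1), phi(x, Ws, bs, 2))  # phi_1(x) and phi_2(x)
```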

    3.1 Multiple Kernel Spaces Extracted from DNN

The E-CGMKL algorithm can be applied to any DNN architecture. Two main architectures are used in this work, namely MLP and CNN. In the MLP architecture, the neural network weights are fully trained using the Keras and TensorFlow libraries with the best hyperparameters. Afterward, the output of the n-th hidden layer of the trained network is used to build the n-th base kernel K_n. The kernel spaces constructed from these hidden layers inherit the compositional hierarchy of the hidden layers and hence enhance the ability of the softmax function to classify non-separable datasets with complex hidden structures. In the case of CNN, the output of a hidden layer is not a single vector as in the MLP architecture; instead, it is a tensor whose dimension depends on the pixels of the input image times the number of filters used in the hidden layer. The output of the hidden layer is therefore flattened before being used as a feature vector. A CNN architecture also has different types of layers, e.g., convolution, pooling, dropout, and dense, of which only the representations of the convolution and dense layers are used. The notation φn(x) used for CNN is depicted in Fig. 3, and a code sketch of this extraction follows the figure.

Figure 3: Convolutional neural network (CNN) architecture composed of convolution, pooling, and fully connected layers. As the figure illustrates, the intermediate representation φn is defined by the output of a convolution or fully connected layer
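
The following Keras sketch illustrates the extraction and flattening step; the architecture loosely follows the experimental setup of Section 4.3, but the input shape, pooling configuration, and data are illustrative:

```python
import numpy as np
from tensorflow import keras

# Hypothetical small CNN; the 6 and 12 filters of size 5x5 follow Section 4.3.
model = keras.Sequential([
    keras.layers.Input(shape=(16, 16, 1)),
    keras.layers.Conv2D(6, 5, activation="relu", padding="same", name="conv1"),
    keras.layers.MaxPooling2D(pool_size=2),
    keras.layers.Conv2D(12, 5, activation="relu", padding="same", name="conv2"),
    keras.layers.MaxPooling2D(pool_size=2),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu", name="dense1"),
    keras.layers.Dense(10, activation="softmax"),
])
# ... training would happen here ...

extractor = keras.Model(model.input, model.get_layer("conv2").output)
X = np.random.rand(4, 16, 16, 1).astype("float32")
T = extractor.predict(X, verbose=0)   # tensor output, here of shape (4, 8, 8, 12)
phi = T.reshape(T.shape[0], -1)       # flattened feature vectors for the kernel space
```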

    3.2 E-CGMKL Algorithm

A brief mathematical description of the CGMKL algorithm [8] is given below. The overall loss function of the CGMKL algorithm takes the form:

L = Σ_{n=1}^{M} [ Lstm(fn) + σ RT_Un + δ RT_Sn ]    (2)

where M is the number of kernel spaces.

In Eq. (2), Lstm(fn) is the loss of the softmax function, written as:

Lstm(fn) = -(1/N) Σ_{i=1}^{N} log [ exp(w^n_{yi} · φn(xi)) / Σ_{c=1}^{C} exp(w^n_c · φn(xi)) ]    (3)

where w^n_c is the c-th column of Wn and yi is the label of the i-th sample.

If there are N samples in the training set, then the n-th kernel space represents the i-th training sample by the feature vector φn(xi), and Wn is the weight matrix of the n-th kernel space, which is learned during parameter learning. If R is the number of neurons in the DNN hidden layer and C is the number of classes, then the dimension of Wn is R × C. In Eq. (2), RT_Un is the regularization term that helps the softmax function work collaboratively in different kernel spaces. This term harmonizes the information between different kernel spaces and provides consistent outputs of samples in them, and σ is the parameter that controls the importance of RT_Un. In a form consistent with this description, the term is given as:

RT_Un = Σ_{l≠n} || Xn Wn - Xl Wl ||_F²    (4)

In Eq. (4), Xn is the matrix of intermediate representations of all samples in the n-th kernel space; its dimension is N × (R+1), the extra column accommodating the bias. RT_Sn is the other regularization term in Eq. (2). To exhibit a geometric feature of the classification results, this term reduces the within-class distance, and δ is the parameter that controls its importance. In this term, Sn is the within-class scatter matrix in the n-th kernel space. In a form consistent with this description, the formula of RT_Sn is given below:

RT_Sn = tr(Wn^T Sn Wn)    (5)

In Eq. (5), Sn for each kernel space is calculated from φn(x): the class mean of the intermediate representations is computed for every class, that mean is subtracted from each intermediate representation of the corresponding class, and the resulting deviations are accumulated as outer products. The dimension of Sn is R × R. The derivative of the loss function of Eq. (2) with respect to Wn decomposes term by term:

∂L/∂Wn = ∂Lstm(fn)/∂Wn + σ ∂RT_Un/∂Wn + 2δ Sn Wn    (6)
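
The construction of Sn described above is easy to express in code; this NumPy sketch uses random stand-in representations and labels:

```python
import numpy as np

def within_class_scatter(Phi, y):
    """Within-class scatter S_n of representations Phi (N x R) with labels y."""
    R = Phi.shape[1]
    S = np.zeros((R, R))
    for c in np.unique(y):
        D = Phi[y == c] - Phi[y == c].mean(axis=0)  # deviations from the class mean
        S += D.T @ D                                # accumulate outer products
    return S                                        # R x R scatter matrix

Phi = np.random.randn(30, 5)                        # stand-in phi_n(x_i) rows
y = np.random.randint(0, 3, size=30)                # stand-in class labels
S = within_class_scatter(Phi, y)
```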

For parameter learning, RMSProp is used as the optimization method. To speed up convergence, RMSProp keeps a memory matrix En that stores a decaying average of the squared gradients:

En ← β En + (1 - β) (∂L/∂Wn)²    (7)

where the squaring is element-wise.

Finally, RMSProp updates the weights as:

Wn ← Wn - α (∂L/∂Wn) / (√En + ε)    (8)

In Eq. (7), β is the RMSProp decay parameter; in Eq. (8), α is the learning rate and ε is a small constant for numerical stability.
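
Eqs. (7) and (8) together give the following update step; the sketch uses the hyperparameter values reported in Section 4.3 and a random placeholder for the gradient of Eq. (6):

```python
import numpy as np

def rmsprop_update(W, grad, E, alpha=0.01, beta=0.9, eps=1e-8):
    """One RMSProp step; E is the running average of squared gradients."""
    E = beta * E + (1 - beta) * grad**2           # Eq. (7)
    W = W - alpha * grad / (np.sqrt(E) + eps)     # Eq. (8)
    return W, E

W = np.zeros((5, 3))                              # weight matrix W_n (R x C)
E = np.zeros_like(W)                              # memory matrix E_n
grad = np.random.randn(5, 3)                      # placeholder gradient from Eq. (6)
W, E = rmsprop_update(W, grad, E)
```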

A slight modification of the prediction step of the CGMKL algorithm is also proposed. Unlike CGMKL, the label of a test sample x is predicted by a μ-weighted combination of the softmax outputs of all kernel spaces:

ŷ = argmax_c Σ_l μl fl(x)_c    (9)

where the weights μl are given by the EasyMKL multiple kernel learning algorithm. These weights measure the importance of each kernel and provide an optimal combination of kernel spaces for test-sample prediction.
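
A small sketch of this prediction rule, with made-up softmax outputs from two kernel spaces:

```python
import numpy as np

def predict(prob_per_space, mu):
    """Eq. (9): mu-weighted vote of softmax outputs across kernel spaces."""
    # prob_per_space: list of (N, C) softmax outputs f_l(x); mu: kernel weights.
    combined = sum(m * P for m, P in zip(mu, prob_per_space))
    return np.argmax(combined, axis=1)

P1 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])  # outputs in kernel space 1
P2 = np.array([[0.5, 0.3, 0.2], [0.1, 0.8, 0.1]])  # outputs in kernel space 2
labels = predict([P1, P2], mu=[0.6, 0.4])          # -> array([0, 1])
```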

    3.3 Training the E-CGMKL Algorithm

The training of our proposed algorithm consists of two phases. In the first phase, a deep neural network (DNN) is trained with the best hyperparameter settings. The feature vectors for the kernel spaces are then extracted from the outputs of the hidden layers of the trained DNN; each hidden layer generates its own kernel space, and the feature vectors are extracted for every data sample given as input to the trained DNN. In the second phase, these feature vectors are given as input to the CGMKL algorithm. Based on the multiple kernel spaces extracted from the trained DNN, the CGMKL algorithm learns its weight parameters using RMSProp, a variant of the gradient descent algorithm. The pseudo-code of the proposed E-CGMKL method is listed in Tab. 1.

Table 1: Learning process of the enhanced collaborative and geometric multi-kernel learning (E-CGMKL) method


    4 Experiment

This section describes the data sets, competing algorithms, and experimental settings used to evaluate the effectiveness of E-CGMKL.

    4.1 Datasets

Nine multi-class classification data sets selected from the UCI Repository are used in this research: Seeds, Wine, Balance Scale, Hayes-Roth, Speaker Accent Recognition, JAFFE, Vehicle, Led7digits, and Pen-Based Recognition of Handwritten Digits. Hayes-Roth is a dataset from a human-subjects transfer test; it has four attributes, namely hobby, age, educational level, and marital status, and the class takes a nominal value between 1 and 3. Wine is a dataset of chemical analysis results for wines grown in the same region of Italy but derived from three different cultivars; the attributes are the quantities of 13 constituents found during the analysis. Seeds consists of 7 geometric parameters of wheat kernels belonging to 3 varieties of wheat: Rosa, Kama, and Canadian. JAFFE is an image dataset of 10 Japanese female expressers with 7 posed expressions: happy, sad, fear, angry, disgust, surprised, and neutral; it consists of 210 images of 256 pixels. Speaker Accent Recognition is an audio dataset of speakers' accents from 6 countries: Spain, Georgia, France, Italy, the United Kingdom, and the United States of America. Led7digits is a dataset of an LED display with 7 light-emitting diodes, hence 7 Boolean attributes, with 10 concepts ranging from 0 to 9. Balance Scale is a dataset classified according to whether the balance-scale tip is to the right, to the left, or balanced. Vehicle is a dataset of silhouettes of 4 types of vehicles: Opel, Saab, bus, and van; different viewing angles were taken to classify the vehicles according to 18 attributes. Pen-Based Recognition of Handwritten Digits contains the 10 digits from 0 to 9 written by 44 writers, with 16 integer attributes ranging from 0 to 100. These data sets are medium-sized, ranging from 160 to 10,992 samples. Data sets with diverse characteristics, having different numbers of input features, attributes, and classes, were selected to analyze the effectiveness of the proposed algorithm under varied constraint conditions. The numbers of classes and attributes of these data sets lie in the ranges 3-10 and 4-256, respectively. A description of the datasets is given in Tab. 2. The input features are preprocessed through normalization to obtain a symmetric and fast-converging cost function. Classification accuracy is used as the metric to evaluate the performance of the E-CGMKL method.

    Table 2: Dataset description

    4.2 Competing Algorithms

The proposed scheme is compared with six baseline methods: CGMKL, BPNN, kerNET, SVM (OVO), SVM (OVR), and Random Forest. A brief description of the baseline methods is given below:

• CGMKL: collaborative and geometric multi-kernel learning, the reference method, in which multiple RBF kernels are utilized.

• BPNN: back-propagation neural network with hidden layers, where the final output is extracted from the last layer, as in the MLP architecture.

• kerNET: an ensemble algorithm that utilizes EasyMKL to combine the base kernels, after which SVM learning is used to train the model.

• SVM (OVA/OVR): support vector machines (one vs. all, also called one vs. rest), a classical method that splits the data into multiple binary classification problems and trains a binary classifier for each problem to classify multi-class data.

• SVM (OVO): support vector machines (one vs. one), a classical method that splits the multiclass data into binary classification problems, one for each pair of classes, to classify multi-class data.

• Random Forest: a widely used multi-class classification algorithm involving decision trees; it follows an ensemble learning approach.

    4.3 Experimental Setting

In the experiments, 5-fold cross-validation is used: the data is divided into 5 folds, of which 4 are used for training and the remaining 1 for testing. This process is repeated until each fold has been used for testing, and the final result is the average of the test accuracies obtained on the five folds. The same partition is used for 5-fold cross-validation for each competing algorithm to obtain unbiased results. To train the BPNN, the number of hidden layers is set to 2 and the numbers of neurons are set to 50 and 25 for the first and second layers, respectively; ADAM is used as the optimization algorithm, ReLU is used as the activation function, and training is stopped when the loss function converges. For the CNN architecture, two convolutional layers are used with 6 and 12 filters of size 5 × 5, respectively. After each convolutional layer, a max-pooling layer of size 1 × 1 with a stride of 2 is used. After the convolutional and max-pooling layers, a dense layer with 128 neurons is placed. Since 2 hidden layers are used, the trained DNN gives 3 base kernels; the proposed algorithm gives the best result using 3 kernels, which is why 2 hidden layers are used in the DNN. The learning rate α of E-CGMKL is set to 0.01. The RMSProp parameters γ, β, and ε are set to 0.1, 0.9, and 10^-8, respectively. The controlling parameters of the regularization terms, δ and σ, are selected from {0.01, 0.1, 1, 10, 100}. The learning of E-CGMKL is stopped when the difference between two consecutive losses falls below the threshold of 10^-4. In kerNET, the base kernels are combined through the EasyMKL algorithm, which has a hyperparameter λ selected from {0, 0.1, 0.2, 0.5, 0.9, 1}. For class prediction, kerNET employs SVM (OVO), which has a hyperparameter C selected from {10^n, n = -3, ..., 3}. The proposed method is also compared with SVM (OVO) and SVM (OVR); their hyperparameters C and gamma are selected from {0.01, 0.1, 1, 10, 100}, and the RBF kernel is used in both. In the Random Forest algorithm, the number of decision trees is set to 100. The parameters of the classification algorithms selected for comparison with E-CGMKL are tuned to obtain the highest accuracy, and the hyperparameters of all algorithms are selected using k-fold cross-validation.
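
For reference, a sketch of this evaluation protocol follows; build_and_fit and predict stand for any of the competing algorithms, the fixed random_state reuses the same partition across methods as described above, and the use of stratified folds is an assumption:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

def cross_validate(build_and_fit, predict, X, y, seed=0):
    """5-fold CV with a fixed partition, reused for every competing algorithm."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = build_and_fit(X[train_idx], y[train_idx])   # train on 4 folds
        y_pred = predict(model, X[test_idx])                # test on the held-out fold
        scores.append(accuracy_score(y[test_idx], y_pred))
    return np.mean(scores)                                  # average test accuracy
```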

    5 Results

Tab. 3 presents the testing accuracy of all classification algorithms used for comparison. As can be seen from Tab. 3, BPNN and Random Forest do not improve on CGMKL. However, SVM (OVO) and SVM (OVR) lead to significant improvements in average test accuracy compared to CGMKL. E-CGMKL outperforms all competing algorithms in terms of test accuracy averaged over all datasets, and achieves the highest test accuracy among the compared methods on 7 out of 9 datasets. Compared to the benchmark CGMKL method, E-CGMKL shows a 1.33% improvement in average test accuracy, which indicates the benefit of constructing multiple kernel spaces from the hidden layers of a trained neural network. Compared to BPNN, E-CGMKL shows an improvement of 2.1% in test accuracy, which likewise indicates the advantage of exploiting the intermediate representations of the hidden layers instead of using only the output layer for classification. E-CGMKL also outperforms the kerNET algorithm, which demonstrates the crucial role played by the CGMKL component in imposing the consistency constraint across multiple kernel spaces and in preserving a geometric feature suitable for classification. Compared to Random Forest, E-CGMKL shows an improvement of 6.12% in test accuracy. As can be seen from Tab. 3, E-CGMKL gives better test accuracy than SVM (OVO), SVM (OVR), and Random Forest on 7 out of 9 data sets. Fig. 4 shows the test accuracies on the individual data sets and Fig. 5 shows the average test accuracy results.

    Table 3: Test accuracy result

Figure 4: Test scores of the data sets used for multi-class classification

Figure 5: Average test score of the comparison algorithms

    5.1 Parameter Analysis

As discussed in the methodology section, δ is the parameter that controls the importance of the RT_Sn regularization term in Eq. (2); to exhibit a geometric feature of the classification results, this term reduces the within-class distance. To illustrate the role of the parameter δ, a data set with three classes is taken, as shown in Fig. 6. The red squares, blue stars, and black triangles represent classes one, two, and three, respectively. Fig. 6 shows how the distribution of points in kernel space changes for the three classes as the value of δ increases. Sub-figure (a) shows the original untrained data set. The value of σ is fixed at 0.01 and the value of δ is varied from 0.01 to 100. Sub-figures (b) to (f) of Fig. 6 show that, as the value of δ increases, the points belonging to the same class form a tighter cluster in the high-dimensional kernel space. These plots demonstrate the effect of the RT_Sn regularization term on the distribution of class-specific points in the kernel space. Experiments reveal that classification performance is better with smaller values of δ and σ than with larger values.


Figure 6: Visualizing the significance of the regularization parameter δ. Sub-figure (a) shows the original data set; the other sub-figures show how points of the same class (same colors) come closer together as the value of δ gradually increases

    5.2 Convergence Comparison of E-CGMKL and CGMKL

This section discusses the convergence behavior of E-CGMKL and CGMKL. To accelerate gradient descent, RMSProp is used in both algorithms. The convergence of both algorithms on all data sets can be seen in Fig. 7. It is evident from the graphs that E-CGMKL converges faster than the CGMKL algorithm; in particular, the convergence of E-CGMKL on the Hayes-Roth, Balance Scale, and Vehicle datasets is significantly faster than that of CGMKL.


Figure 7: Convergence comparison of E-CGMKL and CGMKL. In the sub-figures, the x axis represents the number of iterations and the y axis represents the loss after each iteration

    6 Conclusion

This paper proposes the E-CGMKL algorithm, which enhances the CGMKL algorithm for classifying multi-class data with non-linear distributions. The softmax function is a good state-of-the-art method for multi-class classification of linearly distributed data, but when the decision boundary is non-linear it suffers from performance degradation. Hence, to deal with non-linear data, CGMKL combines the softmax function with multi-kernel learning using the MEKL framework constructed from RBF kernels, in which EKM provides the feature vectors φn(x) corresponding to each data sample x. However, the kernel spaces constructed in the CGMKL algorithm do not have any explicit relationship, as they are constructed by independently setting the width parameter of the RBF kernel to different values. In contrast, the kernel spaces constructed by the E-CGMKL algorithm inherit the compositional hierarchy of the DNN hidden layers and more effectively capture any complex latent structure in the data set. These hierarchical kernel spaces significantly improve the performance of the CGMKL algorithm on complex datasets such as image data sets. CGMKL ensures consistent outputs of samples across kernel spaces and minimizes the within-class distance to highlight the clustering of different classes; E-CGMKL combines these consistency and cluster-preserving aspects of CGMKL with the hierarchical structure of kernel spaces extracted from DNN hidden representations to enhance its predictive performance. The experimental results on various data sets demonstrate the superior performance of the E-CGMKL algorithm compared to other competing methods, including the benchmark CGMKL. The performance gain of E-CGMKL is further enhanced on image classification datasets, as these data sets possess complex latent structure that is effectively captured by kernel spaces constructed from a CNN. Possible future directions for our work include using parameterized activation functions to construct kernel spaces from DNNs and employing transfer learning to efficiently construct kernel spaces for small data sets.

Acknowledgement: I would like to give special thanks to my friend, Mariam Naveed, for supporting me in my work.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
