
    Stable Label-Specific Features Generation for Multi-Label Learning via Mixture-Based Clustering Ensemble

2022-07-18 06:17:02  Yi-Bo Wang, Jun-Yi Hang, and Min-Ling Zhang
IEEE/CAA Journal of Automatica Sinica, 2022, Issue 7

Yi-Bo Wang, Jun-Yi Hang, and Min-Ling Zhang

Abstract—Multi-label learning deals with objects associated with multiple class labels, and aims to induce a predictive model which can assign a set of relevant class labels for an unseen instance. Since each class might possess its own characteristics, the strategy of extracting label-specific features has been widely employed to improve the discrimination process in multi-label learning, where the predictive model is induced based on tailored features specific to each class label instead of the identical instance representations. As a representative approach, LIFT generates label-specific features by conducting clustering analysis. However, its performance may be degraded due to the inherent instability of the single clustering algorithm. To improve this, a novel multi-label learning approach named SENCE (stable label-Specific features gENeration for multi-label learning via mixture-based Clustering Ensemble) is proposed, which stabilizes the generation process of label-specific features via clustering ensemble techniques. Specifically, more stable clustering results are obtained by firstly augmenting the original instance representation with cluster assignments from base clusterings and then fitting a mixture model via the expectation-maximization (EM) algorithm. Extensive experiments on eighteen benchmark data sets show that SENCE performs better than LIFT and other well-established multi-label learning algorithms.

    I. INTRODUCTION

MULTI-LABEL learning aims to build classification models for objects assigned with multiple semantics simultaneously, where each example is represented by a single instance and a set of relevant class labels [1]. As multi-label objects widely exist in the real world, multi-label learning has diverse applications, such as text categorization [2], image annotation [3], web mining [4], and bioinformatics analysis [5].

In recent years, a significant number of algorithms have been proposed for multi-label learning. One common strategy adopted by most existing approaches is to build a predictive model based on the identical instance representations for each class label [1]. However, this strategy might be suboptimal as each class label is supposed to have distinct characteristics of its own. For instance, in text categorization, features corresponding to the word terms voting, reform and government would be informative in discriminating political from non-political documents, while features related to the word terms piano, Mozart and sonata would be informative in discriminating musical from non-musical documents. Therefore, the strategy of label-specific features [6] has been proposed to benefit the discrimination of different class labels.

As a representative approach for label-specific features, LIFT [6] utilizes clustering techniques to investigate the underlying properties of the feature space for each class label. Nevertheless, the clustering in LIFT tends to be unstable due to the inherent instability of the single clustering method [7]. To address this, clustering ensemble techniques [8]–[10] can be utilized to obtain clustering results with stronger stability and robustness. With the assumption that the clustering results of related labels should be similar, LIFTACE [8] employs clustering ensemble techniques to integrate the preliminary clustering results of all class labels based on the consensus similarity matrix. However, it fails to utilize the information embodied in the original data representation during the combination process of clustering ensemble.

To address the aforementioned issues, a novel approach named SENCE, i.e., stable label-Specific features gENeration for multi-label learning via mixture-based Clustering Ensemble, is proposed, which stabilizes the clustering process via a two-stage method. Firstly, several base clusterings are exploited to conduct clustering analysis on the positive and negative instances of each class label. Then, base cluster assignments are combined via a tailored expectation-maximization (EM) procedure, where a mixture model is fitted on clustering-augmented instances. After that, a predictive model is induced based on the label-specific features derived from the improved generation strategy.

In this paper, we advance label-specific feature generation via a novel strategy for clustering combination, an essential step in clustering ensemble. The novel strategy can fully leverage the information hidden in the original data representation and encoded in each cluster assignment, so as to avoid the suboptimal results of existing techniques. Comprehensive experiments over 18 benchmark data sets demonstrate the effectiveness of SENCE.

The rest of this paper is organized as follows. Section II briefly reviews related works on multi-label learning. Section III presents the proposed approach SENCE. Section IV reports the experimental results on 18 benchmark data sets. Finally, Section V concludes.

    II. RELATED WORKS

The task of multi-label learning has been extensively studied in recent years. Generally, the major challenge for multi-label learning is its huge output space, which is exponentially related to the number of class labels. Therefore, exploiting label correlations is regarded as a common strategy to facilitate the learning process. Roughly speaking, existing approaches can be grouped into three categories based on the order of correlations [1], [11], i.e., first-order approaches, second-order approaches and high-order approaches. First-order approaches tackle the multi-label learning problem in a label-by-label manner [3], [12]. Second-order approaches exploit pairwise relationships between class labels [13], [14]. High-order approaches exploit relationships among a subset of class labels or all class labels [15]–[17].

In addition to exploiting label correlations in the output space, another strategy for facilitating multi-label learning is to manipulate the input space. The most straightforward feature manipulation strategy is to conduct dimensionality reduction [18]–[20] or feature selection [21]–[24], which is also a common strategy used in multi-class learning, over the original feature space. Besides, there are also some other methods, such as generating meta-level features [25], [26] with strong discriminative information from the original representation, constructing multi-view representations for multi-label data [27]–[29], etc. Note that all these feature manipulation strategies employ an identical feature representation for all labels in the discrimination process.

Instead, label-specific feature generation serves as an alternative feature manipulation strategy, which extracts the most discriminative features for each individual label. Some works generate label-specific features by selecting a different subset of the original features for each class label [30]–[33]. Based on the sparse assumption, the most pertinent and discriminative features for each label can be identified using spectral clustering and least absolute shrinkage and selection operator (LASSO) algorithms [34].

In addition to conducting label-specific feature selection in the original feature space, it is also feasible to derive label-specific features from a transformed feature space. For example, LIFT [6] performs clustering analysis on the positive and negative instances of each class label, and generates label-specific features by querying the distances between the instance and the clustering centers. To improve this, attribute reduction [35] can be employed in the process of label-specific feature construction to remove redundant information in the generated label-specific features. Some other works aim to enrich label-specific features by exploiting the nearest neighbor rule [36], exploring spatial topology structures [37], jointly considering label-specific feature generation and classification model induction [38], generating BiLabel-specific features based on heuristic prototype selection and embedding [39], or imposing structured sparsity regularization over the label-specific features [40].

Recently, clustering ensemble techniques have been considered to enhance the process of label-specific feature generation. However, the off-the-shelf clustering ensemble techniques employed in previous methods fail to utilize the information embodied in the original data representation [8], [41]. In this paper, we propose a novel clustering ensemble strategy for label-specific feature generation, where the information hidden in the original data representation and encoded in each cluster assignment is taken into consideration simultaneously to facilitate the generation of more stable clustering. We detail our approach in the next section.

    III. THE PROPOSED APPROACH

    A. Preliminaries

Formally, let $\mathcal{X} = \mathbb{R}^d$ denote the $d$-dimensional input space and $\mathcal{Y} = \{l_1, l_2, \ldots, l_q\}$ denote the label space including $q$ class labels. Given the multi-label training set $\mathcal{D} = \{(x_i, Y_i) \mid 1 \le i \le m\}$, where $x_i = [x_{i1}, x_{i2}, \ldots, x_{id}]^{\top} \in \mathcal{X}$ is the $d$-dimensional feature vector and $Y_i \subseteq \mathcal{Y}$ is the set of relevant labels associated with $x_i$, the task of multi-label learning is to induce a predictive model $h: \mathcal{X} \to 2^{\mathcal{Y}}$ from $\mathcal{D}$ which can assign a set of relevant labels $h(u) \subseteq \mathcal{Y}$ for an unseen instance $u \in \mathcal{X}$. Specifically, LIFT learns from $\mathcal{D}$ by taking two steps, i.e., label-specific feature construction and predictive model induction.

In the first step, for each class label $l_k \in \mathcal{Y}$, instances are divided into a positive set $\mathcal{P}_k$ and a negative set $\mathcal{N}_k$ as follows:

$$\mathcal{P}_k = \{x_i \mid (x_i, Y_i) \in \mathcal{D},\ l_k \in Y_i\}, \qquad \mathcal{N}_k = \{x_i \mid (x_i, Y_i) \in \mathcal{D},\ l_k \notin Y_i\}. \tag{1}$$

Clustering analysis is then conducted on $\mathcal{P}_k$ and $\mathcal{N}_k$, and the resulting cluster centers serve as prototypes for the label-specific feature mapping $\phi_k: \mathcal{X} \to \mathcal{Z}_k$:

$$\phi_k(x) = \big[\, d(x, p_1^k), \ldots, d(x, p_{m_k}^k),\ d(x, n_1^k), \ldots, d(x, n_{m_k}^k) \,\big] \tag{2}$$

where $\{p_1^k, \ldots, p_{m_k}^k\}$ and $\{n_1^k, \ldots, n_{m_k}^k\}$ denote the cluster centers obtained on $\mathcal{P}_k$ and $\mathcal{N}_k$, respectively. Here, $d(\cdot, \cdot)$ returns the Euclidean distance between two feature vectors.

In the second step, a new binary training set $\mathcal{B}_k$ is constructed from the original training set $\mathcal{D}$ according to the label-specific features generated by the mapping $\phi_k$:

$$\mathcal{B}_k = \{(\phi_k(x_i), Y_i(k)) \mid (x_i, Y_i) \in \mathcal{D}\} \tag{3}$$

where $Y_i(k) = +1$ if $l_k \in Y_i$ and $Y_i(k) = -1$ otherwise. Based on $\mathcal{B}_k$, a classification model $g_k: \mathcal{Z}_k \to \mathbb{R}$ for $l_k$ is induced by invoking any binary learner $\mathcal{L}$. Given an unseen instance $u \in \mathcal{X}$, its relevant label set is predicted as

$$Y = \{l_k \mid g_k(\phi_k(u)) > 0,\ 1 \le k \le q\}. \tag{4}$$
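For concreteness, the following is a minimal sketch of this two-step LIFT-style procedure, assuming scikit-learn's KMeans and SVC are available and that each label has both positive and negative instances; the helper name lift_features, the ratio argument and the vectorization details are illustrative rather than the original LIFT implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def lift_features(X, y_k, ratio=0.1, random_state=0):
    """Map instances to LIFT-style label-specific features for one label l_k.

    X   : (m, d) feature matrix
    y_k : (m,) binary vector, +1 if l_k is relevant, -1 otherwise
    """
    P, N = X[y_k == 1], X[y_k == -1]
    # Number of clusters kept for both positive and negative instances.
    m_k = max(1, int(np.ceil(ratio * min(len(P), len(N)))))
    centers = np.vstack([
        KMeans(n_clusters=m_k, n_init=10, random_state=random_state).fit(P).cluster_centers_,
        KMeans(n_clusters=m_k, n_init=10, random_state=random_state).fit(N).cluster_centers_,
    ])
    # phi_k(x): distances from each instance to all positive and negative cluster centers.
    Z_k = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return Z_k, centers

# One binary model g_k per label, trained on the transformed representation:
# Z_k, centers = lift_features(X_train, y_train_k)
# g_k = SVC(kernel="linear").fit(Z_k, y_train_k)
```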

    B. SENCE

SENCE learns from $\mathcal{D}$ in four elementary stages, which aim to induce a multi-label classification model with the generated label-specific features. The first two stages are designed to stabilize the clustering process via clustering ensemble techniques. Specifically, the first stage augments the original instance representations based on cluster assignments from base clusterings. The second stage fits a mixture model on the augmented instances via the EM algorithm to obtain more stable clustering results. The third stage constructs label-specific features, and the fourth stage induces predictive models, which are consistent with the corresponding stages in LIFT. To facilitate understanding, the notations used in SENCE are summarized in Table I.

    TABLE I THE SET OF NOTATIONS FOR SENCE

1) Clustering-Based Feature Augmentation: For each class label $l_k$, SENCE divides instances into the positive set and negative set, denoted as $\mathcal{P}_k$ and $\mathcal{N}_k$ respectively, according to (1). To mitigate the inherent instability of a single clustering method, in contrast to LIFT, SENCE employs multiple base clusterings on $\mathcal{P}_k$ and $\mathcal{N}_k$ to derive cluster assignments and re-represents $\mathcal{P}_k$ and $\mathcal{N}_k$ as follows:

$$\tilde{\mathcal{P}}_k = \{[x_i, t_i] \mid x_i \in \mathcal{P}_k\}, \qquad \tilde{\mathcal{N}}_k = \{[x_i, t_i] \mid x_i \in \mathcal{N}_k\}. \tag{5}$$

Here, $t_i = (t_i^{(1)}, t_i^{(2)}, \ldots, t_i^{(r)})$ is a cluster assignment vector, where $r$ is the number of base clusterings and the $p$th element $t_i^{(p)}$ indicates the cluster assignment given by the $p$th base clustering. The cluster assignment vector $t_i$ is regarded as extra features to augment the original instance $x_i$. Thus, such a feature representation of instances in $\tilde{\mathcal{P}}_k$ and $\tilde{\mathcal{N}}_k$ can fully encode the information embodied in the original data representation and the cluster assignments, which makes the following label-specific feature extraction more stable and robust.
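A small sketch of this augmentation stage is given below, assuming $k$-means base clusterings that differ only in their random initialization; r = 5 follows the configuration used later in the experiments, while the per-clustering cluster count, the seeding scheme and the function name are assumptions of this illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def augment_with_cluster_assignments(X_part, r=5, n_clusters=8, seed=0):
    """Stage 1 of SENCE (sketch): append r base cluster assignments to each instance.

    Returns the augmented matrix [x_i, t_i] and the (|part|, r) assignment matrix T.
    """
    assignments = []
    for p in range(r):
        # Each base clustering differs only in its random initialization here;
        # any source of diversity (subsampling, varying k, etc.) would also do.
        km = KMeans(n_clusters=min(n_clusters, len(X_part)), n_init=10,
                    random_state=seed + p).fit(X_part)
        assignments.append(km.labels_)
    T = np.column_stack(assignments)          # cluster assignment vectors t_i
    return np.hstack([X_part, T]), T
```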

2) Clustering Combination via a Mixture Model: Existing clustering ensemble methods work in two steps, i.e., clustering generation and clustering combination. In the clustering generation step, similar to existing clustering ensemble methods, SENCE exploits several base clusterings to conduct clustering analysis on the positive and negative instances of each class label. As the original features and the augmented features are generated in different ways, existing clustering combination methods might be suboptimal. Thus, in the clustering combination step, instead of directly combining base cluster assignments as existing clustering ensemble methods do, SENCE performs another clustering analysis on the augmented instances, in which the original features and the augmented features are treated in different ways. This novel clustering combination strategy can leverage the information hidden in the original data representation and encoded in each cluster assignment to facilitate the generation of more stable clustering. Specifically, each augmented instance $[x_i, t_i]$ is assumed to be generated from a finite mixture distribution

$$p([x_i, t_i] \mid \Theta) = \sum_{j=1}^{m_k} \alpha_j\, p(x_i \mid \theta_j)\, p(t_i \mid \theta_j) \tag{6}$$

where $m_k$ is the number of mixture components, which also corresponds to the number of clusters in the final ensemble clustering. Each mixture component is parameterized by $\theta_j$, while $\alpha_j > 0$ is regarded as the mixing coefficient corresponding to the prior probability of each cluster. In addition, $\sum_{j=1}^{m_k} \alpha_j = 1$. Note that the random variables $x_i$ and $t_i$ are assumed to be conditionally independent to make the problem tractable. This assumption is reasonable since $t_i$ describes the inherent structure of the whole training set, which is relatively insensitive to any particular data point $x_i$.

In this paper, the instance $x_i$ is modeled as a random variable drawn from a marginal distribution described as a mixture of Gaussian distributions according to (6), i.e.,

$$p(x_i) = \sum_{j=1}^{m_k} \alpha_j\, \mathcal{N}(x_i \mid \mu_j, \Sigma_j). \tag{7}$$

Here, each mixture component is parameterized by $\mu_j$ and $\Sigma_j$, where $\mu_j$ and $\Sigma_j$ are the $d$-dimensional mean vector and the covariance matrix of the $j$th mixture component, respectively.

Similarly, the cluster assignment vector $t_i$ is modeled as a random variable drawn from a marginal distribution described as a mixture of multinomial distributions according to (6), i.e.,

$$p(t_i) = \sum_{j=1}^{m_k} \alpha_j\, p(t_i \mid \xi_j). \tag{8}$$

Here, each mixture component is parameterized by $\xi_j$. Assuming that the elements of the cluster assignment vector $t_i$ are conditionally independent, then

$$p(t_i \mid \xi_j) = \prod_{p=1}^{r} \prod_{l=1}^{k^{(p)}} v_{pj}(l)^{\delta(t_i^{(p)},\, l)} \tag{9}$$

where $k^{(p)}$ is the number of clusters in the $p$th base clustering. In addition, $\delta(t_i^{(p)}, l)$ is the Kronecker $\delta$ function, which returns 1 if $t_i^{(p)}$ is equal to $l$ and 0 otherwise. The probability of the instance belonging to the $l$th cluster of the $p$th base clustering under the $j$th mixture component is defined as $v_{pj}(l)$, with $\sum_{l=1}^{k^{(p)}} v_{pj}(l) = 1$.

Based on the above assumptions, the problem of clustering combination is transformed into a maximum likelihood estimation problem. The optimal parameter $\Theta^{\ast}$ w.r.t. $\tilde{\mathcal{P}}_k$ is found by maximizing the log-likelihood function

$$L(\Theta) = \sum_{[x_i, t_i] \in \tilde{\mathcal{P}}_k} \log \Big( \sum_{j=1}^{m_k} \alpha_j\, p(x_i \mid \mu_j, \Sigma_j)\, p(t_i \mid \xi_j) \Big). \tag{10}$$

The optimal parameter $\Theta^{\ast}$ w.r.t. $\tilde{\mathcal{N}}_k$ is found in the same way. However, as all the parameters $\Theta = \{\alpha_j, \mu_j, \Sigma_j, \xi_j \mid 1 \le j \le m_k\}$ are unknown, the problem in (10) cannot generally be solved in closed form. Thus, the EM algorithm is used to optimize (10). In order to perform the EM algorithm, the hidden variable $z_i \in \{1, 2, \ldots, m_k\}$ is introduced to represent the mixture component generating $[x_i, t_i]$, i.e., $z_i = j$ if $[x_i, t_i]$ belongs to the $j$th mixture component. According to Bayes' theorem, the E-step of the EM algorithm estimates the posterior distribution of the hidden variable $z_i$ as follows:

$$\gamma_{ij} = p(z_i = j \mid x_i, t_i) = \frac{\alpha_j\, p(x_i \mid \mu_j, \Sigma_j)\, p(t_i \mid \xi_j)}{\sum_{j'=1}^{m_k} \alpha_{j'}\, p(x_i \mid \mu_{j'}, \Sigma_{j'})\, p(t_i \mid \xi_{j'})}. \tag{11}$$

In other words, $\gamma_{ij}$ gives the posterior probability that $[x_i, t_i]$ is drawn from the $j$th mixture component. Given the value of $\gamma_{ij}$ from the E-step, the M-step maximizes the log-likelihood function $L(\Theta)$. The mean vector $\mu_j$ and the covariance matrix $\Sigma_j$ are derived as follows:

$$\mu_j = \frac{\sum_{i} \gamma_{ij}\, x_i}{\sum_{i} \gamma_{ij}}, \qquad \Sigma_j = \frac{\sum_{i} \gamma_{ij}\, (x_i - \mu_j)(x_i - \mu_j)^{\top}}{\sum_{i} \gamma_{ij}}.$$

Similarly, the optimal value of $v_{pj}(l)$ is obtained as follows:

$$v_{pj}(l) = \frac{\sum_{i} \gamma_{ij}\, \delta(t_i^{(p)}, l)}{\sum_{i} \gamma_{ij}}.$$

The mixing coefficient is updated accordingly as $\alpha_j = \frac{1}{|\tilde{\mathcal{P}}_k|} \sum_{i} \gamma_{ij}$. In summary, in each iteration, the E-step estimates the posterior distribution of the hidden variable $z_i$ according to the current parameters, while the M-step updates the optimal values of all parameters according to (12)−(15).
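The E-step and M-step described above can be rendered as the following sketch; it assumes full covariance matrices with a small ridge term for numerical invertibility and uses illustrative names (em_mixture, nu for $v_{pj}(l)$), so it should be read as one plausible implementation of the procedure rather than the authors' code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_mixture(X, T, m_k, n_iter=50, seed=0, reg=1e-6):
    """EM for a mixture whose components are Gaussian over x_i and
    independent multinomials over the r base cluster assignments t_i (sketch)."""
    rng = np.random.default_rng(seed)
    T = np.asarray(T, dtype=int)
    m, d = X.shape
    r = T.shape[1]
    n_vals = [int(T[:, p].max()) + 1 for p in range(r)]        # k^(p) per base clustering
    alpha = np.full(m_k, 1.0 / m_k)
    mu = X[rng.choice(m, m_k, replace=False)].astype(float)
    Sigma = np.array([np.cov(X.T) + reg * np.eye(d) for _ in range(m_k)])
    nu = [rng.dirichlet(np.ones(k), size=m_k) for k in n_vals]  # nu[p][j, l] = v_pj(l)

    for _ in range(n_iter):
        # E-step: responsibility gamma_ij of component j for the augmented instance.
        log_resp = np.zeros((m, m_k))
        for j in range(m_k):
            log_resp[:, j] = np.log(alpha[j]) + multivariate_normal.logpdf(X, mu[j], Sigma[j])
            for p in range(r):
                log_resp[:, j] += np.log(nu[p][j, T[:, p]] + 1e-12)
        log_resp -= log_resp.max(axis=1, keepdims=True)
        gamma = np.exp(log_resp)
        gamma /= gamma.sum(axis=1, keepdims=True)

        # M-step: update mixing weights, Gaussian parameters and multinomial tables.
        Nj = gamma.sum(axis=0)
        alpha = Nj / m
        for j in range(m_k):
            mu[j] = gamma[:, j] @ X / Nj[j]
            diff = X - mu[j]
            Sigma[j] = (gamma[:, j] * diff.T) @ diff / Nj[j] + reg * np.eye(d)
            for p in range(r):
                for l in range(n_vals[p]):
                    nu[p][j, l] = gamma[T[:, p] == l, j].sum() / Nj[j]

    return gamma.argmax(axis=1)   # final hard cluster assignment per instance
```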

3) Label-Specific Features Construction: According to the induced mixture distribution on $\tilde{\mathcal{P}}_k$, $\mathcal{P}_k$ is divided into $m_k$ disjoint clusters denoted as $\{\mathcal{P}_k^1, \mathcal{P}_k^2, \ldots, \mathcal{P}_k^{m_k}\}$. The final cluster assignment of each instance $x_i$ in $\mathcal{P}_k$ is defined as the mixture component with the largest posterior probability, i.e., $z_i^{\ast} = \arg\max_{j} \gamma_{ij}$, and, following LIFT, the number of mixture components is set to $m_k = \lceil \rho \cdot \min(|\mathcal{P}_k|, |\mathcal{N}_k|) \rceil$. Here, $\rho \in [0, 1]$ is a ratio parameter controlling the number of clusters retained for $\mathcal{P}_k$ and $\mathcal{N}_k$, and $|\cdot|$ returns the set cardinality.

Conceptually, cluster centers characterize the inherent structure of the positive and negative instances. Thus, clustering centers can be used as prototypes to construct label-specific features which are derived from more stable clustering. Similar to LIFT, the mapping $\phi_k: \mathcal{X} \to \mathcal{Z}_k$ can be created according to (2).

4) Predictive Model Induction: Similar to LIFT, SENCE transforms the training set $\mathcal{D}$ into a new binary training set $\mathcal{B}_k$ for each class label according to (3). Any binary learner $\mathcal{L}$ can be applied to induce a classification model $g_k: \mathcal{Z}_k \to \mathbb{R}$ for $l_k$ based on $\mathcal{B}_k$. After that, an associated label set is predicted for an unseen example $u \in \mathcal{X}$ according to (4).

Algorithm 1 summarizes the procedure of SENCE. SENCE first performs clustering several times to re-represent the instances for each label (Steps 2−4); after that, the EM algorithm is used to yield more stable clustering (Steps 5−15) and label-specific features are constructed for each class label (Step 16); then, a family of $q$ binary classification models are induced based on the constructed label-specific features (Steps 18−21); finally, an unseen instance is fed to the learned models for predicting the relevant labels (Step 22).

    Algorithm 1 The Pseudo-Code of SENCE
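The pseudo-code listing itself is not reproduced here; the sketch below only mirrors the step grouping described above, reuses the illustrative helpers from the earlier sketches (augment_with_cluster_assignments, em_mixture), and assumes every label has both positive and negative training instances. It is not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def sence_train(X, Y, rho=0.4, r=5):
    """High-level sketch of SENCE training. X: (m, d) features; Y: (m, q) binary labels."""
    models = []
    for k in range(Y.shape[1]):
        y_k = np.where(Y[:, k] == 1, 1, -1)
        P, N = X[y_k == 1], X[y_k == -1]
        m_k = max(1, int(np.ceil(rho * min(len(P), len(N)))))
        centers = []
        for part in (P, N):
            _, T = augment_with_cluster_assignments(part, r=r)   # Steps 2-4
            labels = em_mixture(part, T, m_k)                    # Steps 5-15
            centers.extend(part[labels == j].mean(axis=0)
                           for j in range(m_k) if np.any(labels == j))
        centers = np.vstack(centers)                             # Step 16
        Z_k = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        models.append((centers, SVC(kernel="linear").fit(Z_k, y_k)))  # Steps 18-21
    return models

def sence_predict(models, u):
    """Step 22: predict the relevant label set for one unseen instance u."""
    return [k for k, (C, g) in enumerate(models)
            if g.decision_function(np.linalg.norm(u[None, :] - C, axis=1)[None, :])[0] > 0]
```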

    IV. EXPERIMENTS

    A. Experimental Setup

Given the multi-label data set $\mathcal{S} = \{(x_i, Y_i) \mid 1 \le i \le m\}$, $|\mathcal{S}|$, $\dim(\mathcal{S})$ and $L(\mathcal{S})$ denote the number of examples, number of features and number of possible class labels, respectively. In addition, several other multi-label properties [1], [15] are denoted as follows (a short computational sketch is given after the list):

• $LCard(\mathcal{S}) = \frac{1}{m}\sum_{i=1}^{m} |Y_i|$: Label cardinality measures the average number of labels per example;

• $LDen(\mathcal{S}) = LCard(\mathcal{S}) / L(\mathcal{S})$: Label density normalizes $LCard(\mathcal{S})$ by the number of possible labels;

• $DL(\mathcal{S}) = |\{Y \mid (x, Y) \in \mathcal{S}\}|$: Distinct label sets counts the number of distinct label sets existing in $\mathcal{S}$;

• $PDL(\mathcal{S}) = DL(\mathcal{S}) / |\mathcal{S}|$: Proportion of distinct label sets normalizes $DL(\mathcal{S})$ by the number of examples.
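These statistics can be computed directly from a binary label matrix; the sketch below, assuming an (m, q) 0/1 matrix Y and an illustrative function name, simply restates the definitions above.

```python
import numpy as np

def multilabel_stats(Y):
    """Compute |S|, LCard, LDen, DL and PDL from a binary label matrix Y of shape (m, q)."""
    m, q = Y.shape
    lcard = Y.sum(axis=1).mean()                      # average number of labels per example
    lden = lcard / q                                  # cardinality normalized by q
    distinct = {tuple(row) for row in Y.astype(int)}  # distinct label sets in S
    return {"examples": m, "labels": q, "LCard": lcard,
            "LDen": lden, "DL": len(distinct), "PDL": len(distinct) / m}
```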

Table II summarizes the detailed characteristics of the benchmark multi-label data sets employed in the experiments. Data sets shown in Table II are roughly ordered by $|\mathcal{S}|$. The 18 benchmark data sets exhibit diversified multi-label properties, which provides a solid basis for thorough performance evaluation.

    To validate the effectiveness of the proposed approach, six state-of-the-art multi-label learning approaches are used for comparative studies.

• LPLC [42]: A second-order multi-label learning approach which exploits the local positive and negative pairwise label correlations by maximizing the $k$-nearest neighbor ($k$NN)-based posterior probability [$k = 10$, $\alpha = 0.1$].

• LIFT [6]: A first-order multi-label learning approach which induces classifiers with label-specific features generated via conducting clustering analysis for each class label [Base learner: linear kernel support vector machine (SVM), $r = 0.1$].

• LLSF [30]: A second-order multi-label learning approach based on label-specific features generated by retaining a different subset of the original features for each class label [$\alpha = 0.5$, $\beta = 0.5$, $\gamma = 0.5$].

• MLSF [34]: A high-order multi-label learning approach based on label-specific features, which performs sparse regression to generate tailored features by retaining a different subset of the original features for a group of class labels [? = 0.01, $\alpha = 0.8$, $\gamma = 0.01$].

• LIFTACE [8]: A high-order multi-label learning approach based on label-specific features generated by considering label correlations via clustering ensemble techniques [Base learner: linear kernel SVM, $r = 0.1$, $\gamma = 10$].

• WRAP [38]: A high-order multi-label learning approach which performs label-specific feature generation and classification model induction in a joint manner [$\lambda_1 = 0.5$, $\lambda_2 = 0.5$, $\lambda_3 = 0.1$, $\alpha = 0.9$].

For each comparing approach, parameter configurations suggested in the respective literature are stated above. For SENCE shown in Algorithm 1, the parameter configuration corresponds to $\rho = 0.4$ and $r = 5$. Moreover, LIBSVM [43] is employed as the binary learning algorithm $\mathcal{L}$ and the $k$-means algorithm is employed as the base clustering algorithm.

In addition, given the test set $\mathcal{T} = \{(x_i, Y_i) \mid 1 \le i \le t\}$ and a family of $q$ learned functions $\{f_1, f_2, \ldots, f_q\}$, six evaluation metrics [1] widely used in multi-label learning are utilized in this paper to evaluate the performance of each comparing approach (a computational sketch follows the definitions):

• Hamming loss:

$$\mathrm{hloss}(h) = \frac{1}{t} \sum_{i=1}^{t} \frac{1}{q}\, |h(x_i)\, \triangle\, Y_i|$$

Hamming loss evaluates the fraction of instance-label pairs which are misclassified. Here, $h(x_i) = \{l_k \mid f_k(x_i) > 0, 1 \le k \le q\}$ corresponds to the predicted set of relevant labels for $x_i$, and $\triangle$ stands for the symmetric difference between two sets.

    TABLE II CHARACTERISTICS OF THE EXPERIMENTAL DATA SETS

• Ranking loss:

$$\mathrm{rloss}(f) = \frac{1}{t} \sum_{i=1}^{t} \frac{1}{|Y_i|\,|\bar{Y}_i|}\, \big| \{ (l_j, l_k) \mid f_j(x_i) \le f_k(x_i),\ (l_j, l_k) \in Y_i \times \bar{Y}_i \} \big|$$

Ranking loss evaluates the fraction of relevant-irrelevant label pairs which are reversely ordered. Here, $\bar{Y}_i$ is the complementary set of $Y_i$ in $\mathcal{Y}$.

• One-error:

$$\mathrm{one\text{-}error}(f) = \frac{1}{t} \sum_{i=1}^{t} \Big\llbracket\, \arg\max_{l_k \in \mathcal{Y}} f_k(x_i) \notin Y_i \,\Big\rrbracket$$

One-error evaluates the fraction of examples whose top-ranked predicted label is not in the ground-truth relevant label set. Here, $\llbracket \pi \rrbracket$ returns 1 if the predicate $\pi$ holds and 0 otherwise.

• Coverage:

$$\mathrm{coverage}(f) = \frac{1}{t} \sum_{i=1}^{t} \Big( \max_{l_k \in Y_i} \mathrm{rank}(x_i, l_k) - 1 \Big)$$

Coverage evaluates the average number of steps needed to move down the ranked label list in order to cover all relevant labels. Here, $\mathrm{rank}(x_i, l_k) = \sum_{j=1}^{q} \llbracket f_j(x_i) \ge f_k(x_i) \rrbracket$ returns the rank of $l_k$ when all class labels in $\mathcal{Y}$ are sorted in descending order according to $\{f_1(x_i), f_2(x_i), \ldots, f_q(x_i)\}$.

• Average precision:

$$\mathrm{avgprec}(f) = \frac{1}{t} \sum_{i=1}^{t} \frac{1}{|Y_i|} \sum_{l_k \in Y_i} \frac{|R(x_i, l_k)|}{\mathrm{rank}(x_i, l_k)}$$

Average precision evaluates the average fraction of relevant labels which rank higher than a particular relevant label. Here, $R(x_i, l_k) = \{l_j \mid \mathrm{rank}(x_i, l_j) \le \mathrm{rank}(x_i, l_k),\ l_j \in Y_i\}$.

• Macro-averaging AUC:

    Macro-averaging AUC evaluates the average AUC value across all class labels.
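The sketch below computes the six metrics from a score matrix F and a binary relevance matrix Y, following the standard definitions surveyed in [1]; the thresholding rule $f_k(x) > 0$ mirrors the text above, while the function name and the handling of degenerate examples are assumptions of this illustration rather than the evaluation code used in the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(F, Y, threshold=0.0):
    """Six standard multi-label metrics (sketch). F: (t, q) real-valued scores,
    Y: (t, q) binary relevance; predicted relevant set is {l_k : f_k(x) > threshold}."""
    t, q = Y.shape
    hamming = np.mean((F > threshold).astype(int) != Y)

    order = np.argsort(-F, axis=1)                    # labels sorted by descending score
    ranks = np.empty_like(order)
    np.put_along_axis(ranks, order, np.arange(1, q + 1)[None, :].repeat(t, 0), axis=1)

    rloss, one_err, cov, avg_prec = [], [], [], []
    for i in range(t):
        rel, irr = np.flatnonzero(Y[i]), np.flatnonzero(1 - Y[i])
        if len(rel) == 0 or len(irr) == 0:
            continue                                   # skip degenerate examples
        rloss.append(np.mean(F[i, rel][:, None] <= F[i, irr][None, :]))
        one_err.append(float(Y[i, order[i, 0]] == 0))
        cov.append(ranks[i, rel].max() - 1)
        avg_prec.append(np.mean([np.sum(ranks[i, rel] <= ranks[i, k]) / ranks[i, k]
                                 for k in rel]))

    macro_auc = np.mean([roc_auc_score(Y[:, k], F[:, k])
                         for k in range(q) if 0 < Y[:, k].sum() < t])
    return dict(hamming=hamming, rloss=np.mean(rloss), one_error=np.mean(one_err),
                coverage=np.mean(cov), avg_precision=np.mean(avg_prec),
                macro_auc=macro_auc)
```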

    B. Experimental Results

    Ten-fold cross-validation is performed on each benchmark data set, where the mean metric value as well as standard deviation are recorded. Tables III and IV report detailed experimental results in terms of each evaluation metric where the best performance on each data set is shown in boldface.

In addition, the widely-accepted Friedman test [44] is employed here for statistical comparisons of multiple algorithms over a number of data sets. Table V summarizes the Friedman statistics $F_F$ and the corresponding critical values on each evaluation metric at the $\alpha = 0.05$ significance level. As shown in Table V, the null hypothesis of "equal" performance among the comparing approaches is clearly rejected in terms of each evaluation metric.

Therefore, the Bonferroni-Dunn test [45] is employed as the post-hoc test [44] to analyze the relative performance among the comparing approaches, where SENCE is treated as the control approach. Here, the difference between the average ranks of SENCE and one comparing approach is calibrated with the critical difference (CD). Their performance difference is deemed to be significant if the average ranks of SENCE and the comparing algorithm differ by at least one CD. In this paper, we have CD = 1.8996 at significance level $\alpha = 0.05$ with $k = 7$ and $N = 18$.
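Both quantities can be reproduced from the per-data-set ranks; the sketch below follows the usual Friedman and Bonferroni-Dunn formulas, with the critical value $q_{0.05} = 2.638$ for $k = 7$ taken from the standard look-up table (an external constant, not stated in the paper). Plugging in $k = 7$ and $N = 18$ recovers CD ≈ 1.8996, as reported above.

```python
import numpy as np

def friedman_ff(rank_matrix):
    """Friedman statistic F_F (with Iman-Davenport correction) from an (N, k)
    matrix holding, for each data set, the ranks of the k algorithms."""
    N, k = rank_matrix.shape
    R = rank_matrix.mean(axis=0)                        # average rank of each algorithm
    chi2 = 12 * N / (k * (k + 1)) * (np.sum(R ** 2) - k * (k + 1) ** 2 / 4)
    return (N - 1) * chi2 / (N * (k - 1) - chi2)

def bonferroni_dunn_cd(k=7, N=18, q_alpha=2.638):
    """Critical difference CD = q_alpha * sqrt(k(k+1)/(6N))."""
    return q_alpha * np.sqrt(k * (k + 1) / (6 * N))

# bonferroni_dunn_cd()  ->  approximately 1.8996
```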

TABLE III EXPERIMENTAL RESULTS OF THE COMPARING APPROACHES ON THE FIRST NINE DATA SETS (↓: THE SMALLER THE BETTER; ↑: THE LARGER THE BETTER)

TABLE III EXPERIMENTAL RESULTS OF THE COMPARING APPROACHES ON THE FIRST NINE DATA SETS (↓: THE SMALLER THE BETTER; ↑: THE LARGER THE BETTER) (CONTINUED)

TABLE IV EXPERIMENTAL RESULTS OF THE COMPARING APPROACHES ON THE OTHER NINE DATA SETS (↓: THE SMALLER THE BETTER; ↑: THE LARGER THE BETTER)

TABLE IV EXPERIMENTAL RESULTS OF THE COMPARING APPROACHES ON THE OTHER NINE DATA SETS (↓: THE SMALLER THE BETTER; ↑: THE LARGER THE BETTER) (CONTINUED)

TABLE V FRIEDMAN STATISTICS $F_F$ IN TERMS OF EACH EVALUATION METRIC AS WELL AS THE CRITICAL VALUE AT 0.05 SIGNIFICANCE LEVEL (#COMPARING APPROACHES k = 7, #DATA SETS N = 18)

    Based on the reported experimental results, the following observations can be made:

• As shown in Fig. 1, it is impressive that SENCE achieves the lowest rank in terms of all evaluation metrics except macro-averaging AUC. Furthermore, all comparing approaches except LPLC and WRAP achieve statistically comparable performance in terms of macro-averaging AUC.

• Compared with approaches without label-specific features, SENCE significantly outperforms LPLC in terms of all evaluation metrics. These results clearly indicate the effectiveness of the constructed label-specific features for multi-label learning.

• Among approaches with label-specific features, SENCE significantly outperforms LLSF, MLSF and WRAP in terms of ranking loss and coverage. SENCE is comparable to LIFT in terms of all evaluation metrics. Furthermore, pairwise t-tests at the 0.05 significance level show that SENCE achieves superior or at least comparable performance to LIFT in 97.2% of the 108 cases (18 data sets × 6 evaluation metrics). These results clearly indicate that our proposed clustering ensemble-based strategy for label-specific features serves as a more effective way of achieving stable clustering and strong generalization performance.

• SENCE is comparable to LIFTACE in terms of all evaluation metrics. Further pairwise t-tests at the 0.05 significance level show that SENCE achieves superior or at least comparable performance to LIFTACE in 96.3% of the 108 cases (18 data sets × 6 evaluation metrics). These results clearly validate the effectiveness of the proposed clustering ensemble strategy employed in SENCE, as both SENCE and LIFTACE utilize clustering ensemble to facilitate label-specific feature construction.

    All metric values are normalized in [0, 1], where for the first four metrics the smaller the metric value the better the performance and for the other two metrics the larger the metric value the better the performance.

    C. Further Analysis

1) Parameter Sensitivity: As shown in Algorithm 1, there are two parameters for SENCE to be tuned, i.e., the number of base clusterings $r$ and the ratio parameter $\rho$. Fig. 2 illustrates how the performance of SENCE changes with varying parameter configurations $\rho \in \{0.1, 0.2, \ldots, 1\}$ and $r \in \{1, 2, \ldots, 10\}$ on three benchmark data sets (evaluation metrics: Hamming loss and ranking loss). As shown in Fig. 2, the performance of SENCE is relatively stable as the value of $r$ increases under a fixed value of $\rho$. On the other hand, the performance of SENCE becomes stable as the value of $\rho$ increases beyond 0.4 under a fixed value of $r$. Therefore, the values of $\rho$ and $r$ are fixed to 0.4 and 5, respectively, for the comparative studies in this paper.

    Fig. 1. Comparison of SENCE (control approach) against six comparing approaches with the Bonferroni-Dunn test. Approaches not connected with SENCE in the CD diagram are considered to have significantly different performance from the control approach (CD = 1.8996 at 0.05 significance level).

Fig. 2. Performance of SENCE changes with varying parameter configurations $\rho \in \{0.1, 0.2, \ldots, 1\}$ and $r \in \{1, 2, \ldots, 10\}$ (Data sets: emotions, image, yeast; First row: Hamming loss, the smaller the better; Second row: Ranking loss, the smaller the better).

2) Base Learner: Among the six comparing algorithms employed in Section IV-A, three of them are tailored towards concrete learning techniques. Specifically, LPLC is adapted from the $k$-nearest neighbor technique while LLSF and WRAP are adapted from linear regression. On the other hand, LIFT, LIFTACE and MLSF work in a similar way to SENCE by transforming the multi-label learning problem so that any base learner can be applied thereafter. Considering that SENCE, LIFT, LIFTACE and MLSF rely on the choice of base learner $\mathcal{L}$ to instantiate learning approaches, Table VI reports their performance on eight data sets instantiated with different choices of base learner $\mathcal{L}$ ($\mathcal{L} \in$ {SVM, $k$-nearest neighbor ($k$NN), classification and regression tree (CART)}). As shown in Table VI, the following observations can be made: a) The choice of base learner has a significant influence on the performance of each algorithm; b) SENCE achieves superior or comparable performance compared to the other algorithms in most cases with different base learners; c) SENCE tends to perform better when SVM is used as the base learner rather than $k$NN or CART.

3) Ablation Study: In the training phase, SENCE employs multiple base clusterings and a mixture model to yield the final clustering. To analyze the rationality of these components, an ablation study on two variants of SENCE is further conducted in this subsection. Specifically, SENCE$_K$ employs $k$-means to obtain clustering results on the augmented instances instead of a mixture model; SENCE$_G$ employs one Gaussian mixture model to yield clustering results on the original instance representations without feature augmentation.

Table VII reports detailed experimental results of SENCE and its two variants SENCE$_K$ and SENCE$_G$ on eight benchmark data sets. Compared with SENCE$_G$, SENCE achieves statistically superior or comparable performance in all cases. These results clearly validate the usefulness of the multiple base clusterings which augment the original instance representations with cluster assignments. Compared with SENCE$_K$, SENCE achieves statistically superior or comparable performance in all cases. These results clearly indicate that the mixture model might be more effective for integrating preliminary clustering results.

TABLE VI EXPERIMENTAL RESULTS OF THE COMPARING APPROACHES INSTANTIATED WITH DIFFERENT BASE LEARNERS L (L ∈ {SVM, K-NEAREST NEIGHBOR (KNN), CLASSIFICATION AND REGRESSION TREE (CART)}). IN ADDITION, •/◦ INDICATES WHETHER THE PERFORMANCE OF SENCE IS STATISTICALLY SUPERIOR/INFERIOR TO THE COMPARING APPROACHES ON EACH DATA SET (PAIRWISE t-TEST AT 0.05 SIGNIFICANCE LEVEL)

TABLE VII EXPERIMENTAL RESULTS OF SENCE AND ITS TWO ABLATED VARIANTS ON EIGHT DATA SETS. IN ADDITION, •/◦ INDICATES WHETHER THE PERFORMANCE OF SENCE IS STATISTICALLY SUPERIOR/INFERIOR TO THE VARIANTS ON EACH DATA SET (PAIRWISE t-TEST AT 0.05 SIGNIFICANCE LEVEL)

4) Algorithmic Complexity: Let $F_{\mathcal{L}}(m, b)$ be the training complexity of the binary learner $\mathcal{L}$ w.r.t. $m$ training examples with $b$-dimensional features. The training complexity of SENCE corresponds to $O\big(q\,(I\,(m d^2 + r \lceil \rho \cdot m \rceil^2 + \lceil \rho \cdot m \rceil d^3) + F_{\mathcal{L}}(m, \lceil \rho \cdot m \rceil))\big)$, where the $d^3$ term is derived from the covariance matrix inversion and $I$ is the number of EM iterations. The testing complexity of SENCE over an unseen instance $u$ corresponds to $O\big(q\,(d \lceil \rho \cdot m \rceil + F'_{\mathcal{L}}(\lceil \rho \cdot m \rceil))\big)$, where $F'_{\mathcal{L}}(b)$ is the testing complexity of $\mathcal{L}$ in predicting one unseen instance with $b$-dimensional features.

Fig. 3 illustrates the execution time (training phase as well as testing phase) of all the comparing algorithms investigated in Section IV-A on five benchmark data sets: emotions, enron, image, corel5k, and NUS-WIDE-c. Across the five data sets, the number of examples, features and class labels ranges from 593 to 10 000, 72 to 1001, and 5 to 374, respectively. The training time of SENCE is relatively comparable to that of the comparing approaches except LPLC and LLSF. Furthermore, the test time of SENCE is higher than that of LLSF and WRAP, while relatively comparable to the other comparing approaches. Note that due to the cubic computational complexity of SENCE w.r.t. $d$ (i.e., the number of features in the input space), the proposed approach may have problems when applied to data sets with high-dimensional features. We leave this for future work.

Fig. 3. Running time (training/test) of each comparing approach on five benchmark data sets. For histogram illustration, the y-axis corresponds to the logarithm of running time.

V. CONCLUSIONS

In this paper, the problem of generating label-specific features for multi-label learning is investigated. A novel approach for label-specific feature generation is proposed, which stabilizes the generation process of the label-specific features via clustering ensemble techniques. Specifically, the final clustering used to construct label-specific features is obtained by fitting a mixture model, via the EM algorithm, on instances augmented with base cluster assignments. Comprehensive experimental studies validate the effectiveness of the proposed approach against state-of-the-art multi-label learning algorithms. In the future, it would be interesting to generate label-specific features by exploiting label correlations on top of the proposed SENCE, and to investigate a more general joint distribution that takes the dependency between the original instance and the corresponding cluster assignment vector into account.

    ACKNOWLEDGMENTS

    We thank the Big Data Center of Southeast University for providing the facility support on the numerical calculations in this paper.
