
    Belief Combination of Classifiers for Incomplete Data

2022-04-15
IEEE/CAA Journal of Automatica Sinica, 2022, Issue 4

Zuowei Zhang, Songtao Ye, Yiru Zhang, Weiping Ding, and Hao Wang

Abstract—Data with missing values, or incomplete information, brings some challenges to the development of classification, as the incompleteness may significantly affect the performance of classifiers. In this paper, we handle missing values in both training and test sets with uncertainty and imprecision reasoning by proposing a new belief combination of classifiers (BCC) method based on evidence theory. The proposed BCC method aims to improve the classification performance of incomplete data by characterizing the uncertainty and imprecision brought by incompleteness. In BCC, different attributes are regarded as independent sources, and the collection of each attribute is considered as a subset. Then, multiple classifiers are trained with each subset independently, allowing each observed attribute to provide a sub-classification result for the query pattern. Finally, these sub-classification results with different weights (discounting factors) are used to provide supplementary information to jointly determine the final classes of query patterns. The weights consist of two aspects: global and local. The global weight, calculated by an optimization function, represents the reliability of each classifier, and the local weight, obtained by mining attribute distribution characteristics, quantifies the importance of the observed attributes to the pattern classification. Abundant comparative experiments, including seven methods on twelve datasets, demonstrate the outperformance of BCC over all baseline methods in terms of accuracy, precision, recall, and F1 measure, with pertinent computational costs.

    I. INTRODUCTION

CLASSIFICATION is a traditional and prevalent problem in data analysis, aiming to assign objects to the categories they belong to. The incompleteness problem in data is one of the critical challenges in classification applications, caused by various data collection or access mechanisms. Incomplete data, also called incomplete patterns or missing data, refers to data with missing values, attributes, or contents. (To avoid ambiguity, we apply the term incomplete data for a dataset with missing values, and incomplete pattern for a pattern with missing values.) This phenomenon has affected classification applications with unsatisfactory results [1]–[3]. The incompleteness of data is a critical issue in risk-sensitive fields, such as industrial systems [4], [5], health management [6] and financial markets [7]. Many methods have emerged to resolve the incompleteness issues around three types of missing mechanisms [4], [8]–[10]: missing completely at random, missing at random, and not missing at random. These methods can be roughly categorized into four groups:

1) Deletion methods. The pattern with missing values is simply discarded. The deletion method is only applicable to cases where the number of incomplete patterns accounts for a small proportion (less than 5%) of the whole dataset [8], [11]. It inevitably leads to a waste of patterns that are sometimes difficult (or costly) to obtain.

2) Model-based methods. The missing values are imputed based on statistical assumptions of the joint distribution, then the completed patterns are classified by conventional classifiers. For example, a (supervised) logistic regression algorithm is proposed in [12] to deal with incomplete datasets, where the missing values are modeled by performing analytic integration with an estimated conditional density function (conditioned on the observed data) based on the Gaussian mixture model (GMM) [13]. However, the approximated model is not robust enough, bringing over-fitting or under-fitting problems.

3) Machine learning methods. Incomplete patterns are directly used to train some specific classifiers. For decision trees, in algorithm C4.5 [14], missing values are simply ignored in gain and entropy calculations, while C5.0 [15] and CART [16] employ imputative frameworks [17]. In hybrid neural networks [18], missing values are imputed by two FCM (fuzzy c-means) based methods. In a support vector solution [19], the modified support vector machine generalizes the approach of mean imputation in the linear case by taking into account the uncertainty of the predicted outputs. In word embedding models [20], the missing attributes are usually valued as 0, which can be regarded as imputation. When most of the attributes are lost, however, the classification performance of these methods is often unsatisfactory.

4) Estimation methods. These are the most widely used methods for dealing with incomplete data. The missing value is replaced (imputed) with an estimation [3], and then the pattern with estimations is classified by conventional classifiers (e.g., the Bayes classifier [21]). We will review some estimation-based methods separately.

There are some popular and representative methods for estimating missing values. The simplest method is mean imputation [22], where the missing values are imputed by the mean of the observed values of the corresponding attribute. K-nearest neighbor imputation (KNNI) [23] is another simple idea, in which various weights depending on the distances between the neighbors and the incomplete pattern are designed to model the different effects of neighbors on the missing values. In fuzzy c-means imputation (FCMI) [24], the missing values are imputed according to the clustering centers generated by FCM and the distances between one object and all the centers. There are also other effective methods for dealing with incomplete data, such as self-organizing map (SOM) imputation [9] and regression imputation [8], [12]. In particular, a fuzzy-based information decomposition (FID) method [25] was proposed recently to address the class imbalance and missing values problems simultaneously. In FID, the incomplete pattern is imputed and used to create synthetic patterns for the minority class to rebalance the training data. In [26], it is assumed that two batches extracted randomly from the same dataset have the same distribution. Optimal transport distances are then leveraged to quantify that criterion and turn it into a loss function to impute missing data values. Besides, practical methods are proposed to minimize these losses using end-to-end learning that can exploit parametric assumptions on the underlying distributions of values. Moreover, some works [27]–[31] are devoted to multiple imputations for missing values to model the uncertainty of the incomplete pattern caused by the lack of information. For example, in [29], a novel method based on the generative adversarial network (GAN) framework [32] attempts to model the data distribution and then performs multiple imputations by drawing numerous times to capture the uncertainty of the interpolated values. These estimation-based methods assume reasonable correlations between missing and observed values, which are not always reliable.
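To make these two simplest baselines concrete, the following minimal sketch implements mean imputation and a weighted KNNI (our illustration, not code from the cited papers; it assumes numeric attributes with missing entries encoded as NaN):

```python
import numpy as np

def mean_imputation(X):
    """Impute each missing entry (NaN) with the mean of its attribute's
    observed values, as in classic mean imputation."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)            # per-attribute means over observed values
    idx = np.where(np.isnan(X))
    X[idx] = np.take(col_means, idx[1])          # fill each NaN with its column mean
    return X

def knn_imputation(X, k=5):
    """Impute missing entries from the k nearest fully observed patterns,
    weighting neighbors by inverse distance over commonly observed attributes."""
    X_out = X.copy()
    complete = X[~np.isnan(X).any(axis=1)]       # candidate neighbors: fully observed rows
    for i, x in enumerate(X):
        miss = np.isnan(x)
        if not miss.any():
            continue
        d = np.linalg.norm(complete[:, ~miss] - x[~miss], axis=1)  # distance on observed attrs
        nn = np.argsort(d)[:k]
        w = 1.0 / (d[nn] + 1e-12)                # inverse-distance weights
        X_out[i, miss] = (w[:, None] * complete[nn][:, miss]).sum(0) / w.sum()
    return X_out
```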

In addition to the issues mentioned above in each type, most methods, such as deletion, model-based, and estimation methods, treat only the missing values and do not consider the negative impact of missing values on the classification. Although machine learning-based methods can classify patterns with missing values, they do not consider the uncertainty caused by missing values in the process, so their performances are sometimes not robust. Targeting these limitations, the motivation of our work is based on the following three aspects.

1) Compared to the deletion methods, we investigate a method that does not remove any observed information, since this information may be precious.

2) Compared to model-based and estimation methods, we develop a method that does not introduce new uncertainties, since statistical assumptions cause uncertainty.

3) Compared to machine learning methods, we aim to improve the classification performance by modeling missing values with uncertainty and imprecision taken into account.

Based on the above analysis, we observe that these methods focus on the test set and assume that the training set is complete in most cases. When the training set is incomplete, the incomplete patterns are either imputed for completion or deleted directly. Moreover, these methods tend to directly model missing values, through estimation strategies or model prediction. However, this brings new uncertainties because estimated values can never replace the real ones. In this case, this paper aims to answer an important question: how to improve the classification accuracy of incomplete data, without losing information or introducing new uncertainty, when many missing values are present in both training and test sets? To derive such an answer, we design a new belief combination of classifiers (BCC) method for missing data based on evidence theory.

Evidence theory [33], [34] has been widely used in pattern classification since it is an efficient tool to characterize and combine uncertain and imprecise information, and it can well compromise (more or less) useful supplementary information provided by different sources in classifier fusion [35]–[37]. For instance, a classifier combination method depending on the concepts of internal reliability and relative reliability is proposed for classifier fusion with contextual reliability evaluation (CF-CRE) [38] based on evidence theory, where the internal reliability and relative reliability capture different aspects of the classification reliability. For non-independent classifiers, the literature [35] studies a method of combining other operators (i.e., a parameterized t-norm) with Dempster's rule, aiming to make their behavior lie between Dempster's rule and the cautious rule. In [36], the transferable belief model (TBM) [39], an uncertain reasoning framework based on evidence theory, is employed to improve the performance of mailing address recognition systems by combining the outputs from several postal address readers (PARs). Notably, the idea of group decision-making is introduced in [40] for reasoning with multiple pieces of evidence to identify and discount unreliable evidence automatically. The core is to construct an adaptive robust combination rule that incorporates the information contained in the consistent focal elements. These classifier fusion methods based on evidence theory have achieved satisfactory performances. However, they are designed for complete patterns. The uncertainty brought by incompleteness can also be considered in the data process [6], [41], which constitutes one of the key ideas of this paper.

The proposed method, named belief combination of classifiers (BCC), is able to characterize the uncertainty and imprecision caused by missing values in both training and test sets without imposing assumptive models. The main contributions of this paper cover the following aspects:

1) A novel classification method based on evidence theory is proposed, applicable to data with missing values, where incompleteness may exist in both training and query sets. To overcome the incompleteness in patterns, classifiers are trained on subsets of attributes, leading to sub-classification results.

2) Uncertainty and imprecision reasoning is applied to missing values in training multiple classifiers. Afterwards, multiple evidential sub-classification results are combined for a final decision. Such a design, cast in the context of uncertainty and imprecision, makes the classification results more robust.

3) An optimization function is designed to calculate the global weight representing the reliability of each classifier, while the local weight, obtained by mining attribute distribution characteristics, is used to quantify the importance of observed attributes to the pattern classification. Moreover, abundant experiments are conducted, demonstrating the superiority of BCC over many conventional methods in terms of classification results.

The remainder of this paper is organized as follows. After a brief introduction of evidence theory in Section II, the belief combination of classifiers (BCC) method for missing data is proposed in detail in Section III. Simulation results are presented in Section IV to evaluate the performance of BCC on different real datasets. Finally, Section V concludes the entire work and gives research perspectives.

    II. BASICS OF EVIDENCE THEORY

Evidence theory, also known as Dempster-Shafer theory (DST) or the theory of belief functions, was first introduced by Dempster [33], then developed by Shafer in his A Mathematical Theory of Evidence [34]. Evidence theory is considered an extended version of fuzzy set theory and has been widely used in data fusion [36], [39], [40], decision-making [37], [42], [43], clustering [44]–[46] and classification [47]–[49] applications. Evidence theory is a powerful framework for imprecise probability. It works with a frame of discernment (DF) $\Omega=\{\omega_1,\ldots,\omega_c\}$ consisting of $c$ exclusive and exhaustive states of a variable; the degrees of uncertainty and imprecision are expressed by subsets of the power set $2^{\Omega}$ with different basic belief assignments (BBAs), also called mass functions of belief.

In classification problems, the evidential class (in evidence theory, the term evidential refers to variables with both uncertainty and imprecision) of a pattern $x$ under the power set $2^{\Omega}$ is mathematically defined as a BBA mapping $m(\cdot)$ from $2^{\Omega}$ to $[0,1]$, which satisfies the conditions $m(\emptyset)=0$ and

$$\sum_{A\in 2^{\Omega}} m(A)=1$$

where $m(A)>0$ represents the support degree of the object associated with the element $A$. In classification problems, $A$ may represent a singleton class with $|A|=1$, or a meta-class with $|A|>1$. $\Omega$ denotes the whole frame, and $m(\Omega)$ is the degree of total ignorance. Total ignorance usually plays a neutral role in the fusion process because it characterizes a vacuous source of evidence.

In multiple classifier fusion processing, each classification result can be regarded as an evidence source represented by a BBA, and the famous Dempster's rule is then used to combine multiple BBAs; the rule is conjunctive, commutative and associative. The DS combination of two distinct sources of evidence characterized by the BBAs $m_1(\cdot)$ and $m_2(\cdot)$ over $2^{\Omega}$ is denoted $m=m_1\oplus m_2$, and it is mathematically defined (assuming the denominator is not equal to zero) by $m(\emptyset)=0$ and, $\forall A\neq\emptyset\in 2^{\Omega}$, by

$$m(A)=\frac{\sum_{B\cap C=A} m_1(B)\,m_2(C)}{1-\sum_{B\cap C=\emptyset} m_1(B)\,m_2(C)}.$$

Often, the probability degrees converted from BBAs are applied for decision-making over singletons in the DF $\Omega$.
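As a concrete illustration of Dempster's rule above, here is a minimal sketch in which BBAs are encoded as dictionaries mapping focal elements (frozensets) to masses (our encoding, not the paper's):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BBAs with Dempster's rule.
    Each BBA is a dict mapping a frozenset (focal element) to its mass."""
    conflict = 0.0
    combined = {}
    for (B, bm), (C, cm) in product(m1.items(), m2.items()):
        A = B & C                     # conjunctive intersection of focal elements
        if A:
            combined[A] = combined.get(A, 0.0) + bm * cm
        else:
            conflict += bm * cm       # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: Dempster's rule undefined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}  # normalization

# Example over Omega = {w1, w2}: two sources with partial ignorance.
m1 = {frozenset({"w1"}): 0.6, frozenset({"w1", "w2"}): 0.4}
m2 = {frozenset({"w2"}): 0.3, frozenset({"w1", "w2"}): 0.7}
print(dempster_combine(m1, m2))
```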

There are also a few methods [31], [41], [50] based on evidence theory to deal with incomplete data. For example, a prototype-based credal classification (PCC) method is proposed in [31]. In PCC, the incomplete pattern is edited with $c$ possible versions, for a $c$-class problem, to obtain $c$ different classification results by a single classifier. Then the $c$ results with different weights are fused to obtain the final classification of the incomplete pattern. Although PCC can characterize the uncertainty caused by missing values, the estimation strategy also introduces new uncertainty information. Besides, it assumes that the training set is complete.

    III. BELIEF COMBINATION OF CLASSIFIERS

This section presents a belief combination of classifiers (BCC) method based on evidence theory for classifying incomplete data. BCC can faithfully make use of the observed data without imposing any assumption on missing values.

Given an $S$-dimensional dataset $\mathbf{X}$ with class DF $\Omega=\{\omega_1,\ldots,\omega_c\}$, the $s$th ($s\in\{1,\ldots,S\}$) dimension of attributes, denoted $\mathbf{X}^s$, is regarded as an independent source: each $\mathbf{X}^s$ forms a training subset from which a basic classifier $C_s$ is trained.

For an incomplete query pattern $x\in\mathbf{X}$ with $H$ observed attributes, only the $H$ classifiers corresponding to the observed attributes can provide reliable results among the $S$ classifiers.
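Concretely, this per-attribute decomposition can be sketched as follows (our illustration, assuming scikit-learn-style one-dimensional basic classifiers and NaN-encoded missing values; the function names are ours):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_attribute_classifiers(X_train, y_train, make_clf=lambda: KNeighborsClassifier(5)):
    """Train one basic classifier per attribute, each on the patterns
    for which that attribute is observed (non-NaN)."""
    classifiers = []
    for s in range(X_train.shape[1]):
        observed = ~np.isnan(X_train[:, s])               # patterns observing attribute s
        clf = make_clf()
        clf.fit(X_train[observed, s].reshape(-1, 1), y_train[observed])
        classifiers.append(clf)
    return classifiers

def sub_classify(classifiers, x):
    """Each observed attribute of query x yields one sub-classification
    (a class-probability vector); missing attributes contribute nothing."""
    results = {}
    for s, clf in enumerate(classifiers):
        if not np.isnan(x[s]):
            results[s] = clf.predict_proba(np.array([[x[s]]]))[0]
    return results
```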

The calculation of BCC mainly consists of three steps: evaluation of classifier reliability, evaluation of attribute importance, and the global fusion of classifiers with decisions.

    A. Classifier Reliability Calculation

Therefore, a set of equations is constructed as follows:
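The referenced equations (6)–(8) and (10) are not reproduced here. Purely as a hypothetical illustration of the idea, namely global reliability weights fitted on training data so that the weighted sum of sub-classifier outputs reproduces the labels, one least-squares sketch could look like this (our assumption, not the paper's actual formulation):

```python
import numpy as np
from scipy.optimize import minimize

def fit_reliability(P, Y):
    """Hypothetical illustration only: fit one global reliability weight per
    classifier by least squares. P has shape (S, N, c): probability outputs of
    S classifiers on N training patterns; Y is (N, c) one-hot labels."""
    S = P.shape[0]

    def loss(alpha):
        fused = np.tensordot(alpha, P, axes=1)   # (N, c): weighted sum of outputs
        return ((fused - Y) ** 2).sum()

    # Reliabilities constrained to [0, 1] (cf. the constraint (8) in the paper).
    res = minimize(loss, x0=np.full(S, 0.5), bounds=[(0.0, 1.0)] * S)
    return res.x
```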

    B. Attribute Importance Calculation

Many works have been devoted to exploring the importance of attributes in various data analysis domains [6]. In this work, the reliability matrix α of classifiers is estimated by a global optimal combination process, which may not apply to some specific patterns. The discounting factors of evidence are also related to the distributions of the (missing) attributes of query patterns. As a simple example, Fig. 1 illustrates the distribution of the 1st and 5th attributes for the real dataset named Cloud, from the UCI (University of California, Irvine) repository (available at http://archive.ics.uci.edu/ml).

Fig. 1. The distribution of attributes in different classes for the Cloud dataset.

It can be observed from Fig. 1 that the distributions of the 5th attribute under the two classes are quite different. Therefore, the cross-entropy between the two classes in the 5th attribute is larger than that in the 1st attribute, indicating that the 5th attribute provides more prior information than the 1st one. It is therefore reasonable to assign different importance weights to different attributes, so that the crucial attributes play more decisive roles while those that are not very useful for classification are less influential.
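The attribute-importance intuition can be illustrated numerically; the sketch below scores an attribute by the cross-entropy between histogram estimates of its per-class distributions (an illustration of the idea only; the paper's local weights are defined by its (11) and (12)):

```python
import numpy as np

def class_divergence(x_attr, y, bins=20, eps=1e-12):
    """Illustrative only: score an attribute by the cross-entropy between
    the histogram distributions of its values under two classes."""
    lo, hi = np.nanmin(x_attr), np.nanmax(x_attr)
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(x_attr[y == 0], bins=edges)
    q, _ = np.histogram(x_attr[y == 1], bins=edges)
    p = (p + eps) / (p + eps).sum()    # smoothed class-0 distribution
    q = (q + eps) / (q + eps).sum()    # smoothed class-1 distribution
    return -(p * np.log(q)).sum()      # H(p, q): larger => more discriminative
```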

It should be noted that the distribution of missing values differs from pattern to pattern; thus, the local attribute weight also changes depending on the observed attributes of $x$. This will be discussed in detail in the next part.

    C. Global Combination of Classifiers and Final Decision

The pseudo-code of the BCC method is given in Algorithm 1 for convenience.

Algorithm 1: Belief Combination of Classifiers (BCC)
Require: Training set $\mathbf{X}_{train}=\{x_1,\ldots,x_M\}$; test set $\mathbf{X}_{test}=\{x_1,\ldots,x_N\}$; a basic classifier $C$.
Ensure: Class decision results.
1: function BCC(training data $\mathbf{X}_{train}$, test data $\mathbf{X}_{test}$)
2:   Reconstruct the $S$ training subsets $\mathbf{X}^s_{train}$;
3:   Train the $S$ corresponding basic classifiers $C_s$;
4:   Construct the reliability optimization equations by (6) and (7);
5:   Optimize the classifier reliability matrix by (10) with the constraint of (8);
6:   Obtain the Pearson product-moment correlation coefficient $\rho_s$ by (11);
7:   Calculate the local attribute importance by (12);
8:   for $i=1$ to $N$ do
9:     Obtain the classification results of $x_i$ with the trained classifiers;
10:    Calculate the joint discounting factor $\gamma_s$ by (13);
11:    Extract the relative discounting factor $\hat{\gamma}_h$ by (14a) and (14b);
12:    Discount these different pieces of evidence by (15);
13:    Fuse these different pieces of evidence by (16);
14:    Classify the query pattern $x_i$ and decide by (17);
15:  end for
16:  return class labels
17: end function

    D. Discussion

1) Selection of Basic Classifiers: Since we focus on improving the accuracy of classifying incomplete data rather than improving the classifier's performance, any classifier can be employed in principle. However, in the combination process, each observed attribute is considered an independent pattern. In such a case, the patterns used to train the basic classifier are 1-dimensional. Therefore, it is better to choose general classifiers as benchmarks rather than those designed specifically for high-dimensional data. Most conventional classifiers are designed within the framework of probability, so the focal element $A$ usually represents a specific class under the framework $\Omega$ in (7), thereby considering specific classes as an admissible solution of the classification. Nevertheless, there are also some classifiers based on the framework of belief functions [47], which can generate specific classes as well as the total ignorant class $\Omega$. In such a case, $m(A)$ is the belief mass committed to the focal element (class) $A$ in (15). Of course, $p_i=p(\omega_i)$ is equal to $m(A)$ if the focal element $A$ represents the specific class $\omega_i$ in (7).

2) Selection of Combination Rule: For the selection of the combination rule, it is known that numerous combination rules exist for dealing with different kinds of evidence sources and conflicts. However, our goal is not to propose a new combination rule but to improve the classification performance by reasonably characterizing the uncertainty and imprecision caused by missing values based on evidence theory when classifying incomplete data. A number of experiments prove that this is feasible. In fact, many combination rules have been proposed for dealing with conflicting evidence, such as Smets' rule [57], Yager's rule [58], the Dubois-Prade (DP) rule [59] and the proportional conflict redistribution (PCR) rules [60].

The property of associativity is important in our application since the fusion of multiple pieces of evidence is calculated in a sequential way in which the order makes no difference. The above rules are not associative, which makes them less attractive in applications. High conflict within evidence sources is another issue in information fusion, which often makes results hardly reliable. Considering these two issues, in our method the sources of evidence are first modified to prevent (possibly) high conflicts, and then combined by Dempster's rule to determine the final class of query patterns. Based on Shafer's discounting method, the whole conflict is distributed to total ignorance $\Omega$ due to normalization when different pieces of evidence are highly conflicting or in some special low-conflict situations.
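Shafer's discounting step mentioned above has a standard closed form: $m^{\alpha}(A)=\alpha\, m(A)$ for $A\neq\Omega$ and $m^{\alpha}(\Omega)=1-\alpha+\alpha\, m(\Omega)$. A minimal sketch, reusing the BBA encoding from the earlier Dempster example:

```python
def discount(m, alpha, omega):
    """Shafer's discounting: scale each focal mass by the reliability
    alpha and move the remaining 1 - alpha onto total ignorance Omega."""
    omega = frozenset(omega)
    out = {A: alpha * v for A, v in m.items() if A != omega and alpha * v > 0}
    out[omega] = 1.0 - alpha + alpha * m.get(omega, 0.0)
    return out

# A fully unreliable source (alpha = 0) becomes vacuous: all mass on Omega,
# so it stays neutral under Dempster's rule, as Section II notes.
m = {frozenset({"w1"}): 0.8, frozenset({"w1", "w2"}): 0.2}
print(discount(m, 0.0, {"w1", "w2"}))   # {frozenset({'w1', 'w2'}): 1.0}
```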

    IV. EXPERIMENT APPLICATIONS

To validate the effectiveness of the BCC method confronting missing data, a number of benchmark datasets are employed to compare it with several other conventional methods based on four common criteria: 1) accuracy (AC); 2) precision (PE); 3) recall (RE); 4) F1-measure (F1) [61].

    A. Methods for Comparison

The classification performance of the new BCC is evaluated by comparisons with several other conventional methods, including mean imputation (MI) [22], K-nearest neighbors imputation (KNNI) [23], fuzzy c-means imputation (FCMI) [24], prototype-based credal classification (PCC) [31], fuzzy-based information decomposition (FID) [25], generative adversarial imputation nets (GAIN) [29] and batch Sinkhorn imputation (BSI) [26]. In MI, the missing values in the training set are replaced by the average values of the same class, and the missing values in the test set are imputed by the means of the observed values at the same position in the training set. In KNNI, the incomplete pattern in the training set is estimated by the KNNs with different weights depending on the distances between the pattern and the neighbors in the same class, and the incomplete pattern in the test set is estimated by the global KNNs. Since the training set is complete by default in FCMI, PCC, and FID, we use the average values of the class to impute the missing values in the training set, similar to MI. In FCMI, the missing values in the test set are imputed according to the clustering centers generated by FCM and the distances between the object and the centers. In PCC, the incomplete pattern in the test set is imputed with $c$ possible versions for a $c$-class problem, while the centers of the $c$ classes are obtained from the training set. In FID, the missing values in the test set are estimated by taking into account different contributions of the observed data. In GAIN, generative adversarial nets are trained to estimate the missing values in the test set. In BSI, the missing values in the test set are imputed by minimizing optimal transport distances between quantitative variables. For all parameters of the compared methods, we use the default values given in the original papers.

Different from the above methods, the proposed BCC considers each attribute as an independent source, and the collection of each attribute is considered as a subset. Afterward, each subset trains a classifier independently, which allows each observed attribute to provide a sub-classification result for the query pattern. These sub-classification results with different weights (discounting factors) are then used to provide supplementary information to jointly determine the final classes of query patterns.

    B. Basic Classifiers

In our simulations, the K-NN technique [62], the evidential K-nearest neighbor (EK-NN) [47] and the Bayesian classifier (Bayes) [21] are employed as the basic classifiers to generate pieces of evidence. In the Bayesian classifier, Gaussian distributions are assumed for each attribute. For the parameters in the classifiers, we apply the default values identical to the original papers. The outputs of EK-NN are BBAs consisting of the singletons and the total ignorance, and the outputs of the K-NN and Bayesian classifiers are probability values. Both BBAs and probabilities can be directly applied to the optimal combination in the BCC method, as explained in Section III-D.
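For instance, with one attribute per classifier, the Bayesian basic classifier reduces to a one-dimensional Gaussian class-conditional model; a sketch under that assumption (our code, not the authors'):

```python
import numpy as np

class Gaussian1DBayes:
    """One-dimensional Bayes classifier with Gaussian class-conditionals,
    matching the per-attribute setting used for the basic classifiers."""
    def fit(self, x, y):
        self.classes = np.unique(y)
        self.mu = np.array([x[y == c].mean() for c in self.classes])
        self.sigma = np.array([x[y == c].std() + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict_proba(self, x):
        x = np.asarray(x)[:, None]                       # (n, 1) against (c,) params
        lik = np.exp(-0.5 * ((x - self.mu) / self.sigma) ** 2) / self.sigma
        post = lik * self.prior
        return post / post.sum(axis=1, keepdims=True)    # normalized posteriors
```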

    C. Benchmark Datasets

Twelve datasets from the UCI repository are used to evaluate the effectiveness of BCC in comparison with the main conventional methods. The basic features of these datasets are shown in Table I, including the number of classes (#Class), number of attributes (#Attr.) and number of instances (#Inst.). The size of each dataset is defined by its number of attributes (#Attr.) and number of instances (#Inst.).

    TABLE I BASIC INFORMATION OF THE USED DATASETS

In the experiments, each attribute independently forms a subset and trains a basic classifier. In order to demonstrate the effectiveness under different incompleteness levels, we assume that each pattern in the training and test sets has φ missing (unobserved) values under the missing-completely-at-random mechanism. In the experiments, different values of φ are employed to verify the performance of the BCC method.
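Such an incompleteness protocol is easy to reproduce; a minimal sketch (our illustration) that removes exactly φ values per pattern:

```python
import numpy as np

def inject_mcar(X, phi, seed=0):
    """Remove exactly phi attribute values per pattern, chosen uniformly at
    random (missing completely at random), encoding them as NaN."""
    rng = np.random.default_rng(seed)
    X = X.astype(float).copy()
    n, S = X.shape
    for i in range(n):
        miss = rng.choice(S, size=phi, replace=False)   # phi missing attributes per pattern
        X[i, miss] = np.nan
    return X
```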

    D. Performance Evaluation

We use the simplest 2-fold cross validation. Since the sizes of the training and test sets are equal, all patterns can be used for training and for testing on each fold, respectively. The program is randomly run 10 times (all results demonstrated in this paper are average values), and the performance of BCC is shown for various φ, denoting the number of missing values for each training and test pattern, as reported in Tables II–VII. Specifically, the accuracy values (AC) for the compared methods, based on the K-NN, EK-NN and Bayesian basic classifiers (the differences between the chosen classifiers are beyond the scope of this paper), are respectively reported in Tables II, IV and VI. The other indexes, PE, RE and F1, are recorded in Tables III, V, and VII, with the K-NN, EK-NN and Bayesian basic classifiers integrated respectively. In Table III, taking the Cl dataset as an example, PE is the average value over different φ (φ = 4, 6, 8 in Table II), based on the K-NN classifier. In addition, the average histograms over φ with different datasets and basic classifiers are plotted in Fig. 2 to compare the effectiveness of BCC more intuitively. The y-axis represents the accuracy, calculated by averaging the outputs over the different φ in Tables II, IV and VI.
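The evaluation protocol is likewise straightforward to reproduce; a sketch assuming a hypothetical classify(X_train, y_train, X_test) wrapper around any of the compared methods:

```python
import numpy as np
from sklearn.metrics import accuracy_score

def evaluate(X, y, classify, runs=10, seed=0):
    """2-fold cross validation averaged over several random runs:
    each half serves once as training set and once as test set."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(runs):
        perm = rng.permutation(len(y))
        halves = np.array_split(perm, 2)
        for a, b in [(0, 1), (1, 0)]:
            tr, te = halves[a], halves[b]
            y_pred = classify(X[tr], y[tr], X[te])   # hypothetical classifier wrapper
            scores.append(accuracy_score(y[te], y_pred))
    return float(np.mean(scores))
```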

From these results, it can be observed that the BCC method generally provides better results than the other conventional methods in most cases. Moreover, these results support the following analysis.

As typical single imputation strategies, MI, KNNI, and FCMI predict possible estimations of missing values based on different mechanisms, but such estimations may not be reasonable enough. For example, in KNNI, similar patterns (neighbors) are employed to impute missing values. In this case, the selection of the similarity measure norm is an essential process. If an inappropriate measure is chosen, unsatisfactory results are often obtained. Moreover, the disadvantage of direct modeling of missing values is that it is impossible to avoid bringing new uncertainties, because estimation can never replace the real world. Furthermore, only modeling missing values is insufficient because the uncertainty caused by missing values can also negatively affect the classifier's performance. Therefore, we can see that the results obtained by these methods are often not satisfactory.

PCC and GAIN are multiple imputation strategies, which means that a missing value may be estimated as multiple versions to characterize the uncertainty. This is an improvement and reasonable in some ways; however, multiple estimations are not always better than single estimation strategies [63]. In particular, as an imputation strategy, GAIN models the uncertainty of missing values but still does not characterize the uncertainty and imprecision in the model and results from a classification perspective. On the other hand, as an evidence theory-based method, PCC is similar to the proposed BCC method in characterizing the uncertainty and imprecision caused by missing values. However, in PCC, the use of class centers as a benchmark for estimating missing values is unreasonable, and PCC does not assess the reasonableness and necessity of multiple estimations. Therefore, it can characterize the imprecision in the results, but the performance is still not good enough.

    TABLE II THE ACCURACY (AC) OF DIFFERENT METHODS WITH K-NN CLASSIFIER (%)

As the latest works, FID, GAIN, and BSI have proposed some feasible solutions based on data distribution, but they only partially address the problem of modeling incomplete data. For example, GAIN and BSI, as model-based methods, are dedicated to perfectly approximating the real world, which is practically impossible. In particular, BSI assumes that two random batches from the same dataset should follow the same distribution. Still, in some scenarios, the distribution of the data itself is hard to estimate. If the data distribution is not modeled precisely enough, the reasonableness of the estimations is questionable. FID is a pioneering work that handles both incomplete and imbalanced data classification. In FID, the imbalance is also treated as incompleteness, so its essence is still the classification of incomplete data. However, the classification results are also less than ideal because the process does not reasonably characterize the uncertainty and imprecision caused by missing values.

    TABLE III THE PE, RE AND F1 OF DIFFERENT METHODS WITH K-NN CLASSIFIER (%)

    TABLE IV THE ACCURACY (AC) OF DIFFERENT METHODS WITH EK-NN CLASSIFIER (%)

    TABLE V THE PE, RE AND F1 OF DIFFERENT METHODS WITH EK-NN CLASSIFIER (%)

    TABLE VI THE ACCURACY (AC) OF DIFFERENT METHODS WITH BAYESIAN CLASSIFIER (%)

    TABLE VII THE PE, RE AND F1 OF DIFFERENT METHODS WITH BAYESIAN CLASSIFIER (%)

    TABLE VIII EXECUTION TIME WITH K-NN CLASSIFIER (S)

The proposed BCC method avoids modeling missing values directly. Thus, it can handle well the cases where both the training and test sets contain many missing values, without losing information or introducing new uncertainty. We model each attribute independently while avoiding negative interactions between attributes. Simultaneously, different attributes are able to provide complementary information under the framework of evidence theory. By doing this, the distribution characteristics of different attributes are thoroughly mined, and each attribute can train a basic classifier independently. Moreover, the performance of the classifiers depends on the quality of the training sets constrained by missing values. In this case, the global measurement of the weight of each classifier is an important part of making decisions as cautious as possible. In addition, the proposed BCC method is end-to-end, and it can characterize the uncertainty and imprecision in the data, the model, and the results simultaneously. Therefore, the proposed BCC is often able to outperform the other comparison methods.

Furthermore, from Fig. 3, it can be observed that the accuracy is less affected by the data incompleteness level for BCC than for the other methods, indicating that BCC is more robust. Indeed, BCC can be regarded as an intermediate of multiple classifiers, obtained through the combination process. Since the incompleteness is characterized by imprecision rather than uncertainty in evidence theory, the missing values have less impact on the classification results than in other methods, a property realized by Dempster's combination rule. Such a mechanism brings robustness to the BCC method.

In addition, we can also observe that with the increase of φ, the classification accuracy of the different methods decreases in most cases. This is consistent with our intuitive perception, because a larger φ implies the loss of more attributes from a pattern. The less information the pattern contains, the more difficult it is to classify the pattern correctly, and the classification accuracy thereby gradually declines. However, the performance of BCC is still better than that of the other conventional methods in the same case. In addition, the increase in φ also reflects, from another perspective, the increase in the proportion of missing values. For example, if φ = 9, for the Cloud dataset (which has ten attributes), the missing rate of the training and testing sets is 9/10 = 90%.

We admit that a few issues exist in BCC, one of which is the computational cost. Since BCC is a combination mechanism of multiple classifiers, it is less efficient than a single classifier. Thus, the satisfactory results are obtained at the cost of more computational resources. Another potential issue is the combination step. Dempster's rule is applied to combine multiple classification results. When high conflicts still exist between the discounted pieces of evidence, the rule may return an undecidable BBA. In this case, the issue becomes non-negligible.

    Fig. 2. The average accuracy (AC) of different methods in various datasets.

    E. Computational Cost

The execution times of the different methods with K-NN as the basic classifier are shown in Table VIII. The table shows that the BCC method is indeed more time-consuming than most other methods, since BCC needs to spend more time training the basic classifiers and optimizing the discounting factors of the pieces of evidence. However, its computational cost is much lower than that of the KNNI, GAIN and BSI methods in most cases; for KNNI, for example, searching for neighbors is a very time-consuming task as the number of patterns increases. Therefore, it is necessary to make a trade-off between performance and computational cost when using the BCC method. Generally speaking, BCC is more suitable for applications in which high classification accuracy is required whereas efficient computation is not a strong requirement.

    V. CONCLUSION

Fig. 3. The effect of parameter φ on the average accuracy (AC) of different methods over various datasets. The horizontal and vertical axes represent the value of parameter φ and the AC, respectively.

Confronting the problem of classification over incomplete data, we proposed a new belief combination of classifiers (BCC) method in the framework of evidence theory, under the setting where patterns in both the training set and the test set are incomplete. The BCC method characterizes the uncertainty and imprecision caused by missing values with the theory of belief functions. By doing so, BCC is able to make full use of the observed data while introducing little impact in dealing with missing data. Consequently, it outperforms conventional classification methods for incomplete data. The core of BCC is to construct attributes as independent sources, each of which is used to train a classifier and thereby predict the class of query patterns. As a result, multiple outputs with different discounting factors for the query pattern are obtained. The discounting factor includes two parts: the global classifier reliability and the local attribute importance, with which the famous Dempster's rule is employed to fuse the discounted pieces of evidence representing evidential sub-classes and then determine the final belief classification for the query patterns. The effectiveness of BCC is demonstrated on various real datasets by comparisons with other conventional methods. The experimental results show that BCC significantly improves the performance in accuracy, precision, recall, and F1 measure. Furthermore, the new method is robust since it does not need manually set parameters, making it convenient for practical applications.

In recent methods, missing values are usually imputed by value approximation, an area significantly influenced by deep learning approaches. However, the lack of robustness caused by over-fitting and under-fitting issues has been an obstacle to applying these theoretical methods. The proposed BCC method makes a step forward by taking decisions between specific classes and total ignorance. However, it cannot yet characterize local imprecision [31], [50]. To conclude, the mathematical method given in this paper can somewhat reveal the hidden real world behind missing data. In the future, we will employ classifiers specially designed for high-dimensional data, and we will explore applying a similar methodology to more missing-data scenarios beyond conventional classification. Concerning the robustness problem caused by missing data, a more general framework managing uncertainty and imprecision, adaptable to various learning tasks with incomplete patterns, is also in the scope of our future work.
