Lina Wang, Qixiang Zhang, Xiling Niu, Yongjun Ren and Jinyue Xia
1 School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
2 Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Zhuhai, 519080, China
3 School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, 210044, China
4 International Business Machines Corporation (IBM), New York, 10504, USA
Abstract: Outlier detection is a key research area in data mining, as it identifies data that are inconsistent with the rest of a data set. Outlier detection aims to find a small number of abnormal objects in a large body of data and has been applied in many fields, including fraud detection, network intrusion detection, disaster prediction, medical diagnosis, public security, and image processing. Although outlier detection is widely applied in real systems, its effectiveness is challenged by high dimensionality and redundant data attributes, which lead to detection errors and complicated calculations. The prevalence of mixed data is a further issue for current outlier detection algorithms. This paper studies an outlier detection method for mixed data based on neighborhood combinatorial entropy, which improves detection performance by reducing the data dimension with an attribute reduction algorithm. The significance of each attribute is determined, and less influential attributes are removed based on neighborhood combinatorial entropy. Outlier detection is then conducted with the local outlier factor algorithm. Owing to neighborhood combinatorial entropy, the proposed method can be applied effectively to numerical and mixed multidimensional data. In the experimental part of this paper, we compare outlier detection before and after attribute reduction and show that removing less influential attributes enhances detection accuracy on numerical and mixed multidimensional data.
Keywords: Neighborhood combinatorial entropy; attribute reduction; mixed data; outlier detection
Outlier detection is frequently researched in the field of data mining and aims at identifying abnormal data in a data set [1]. It is widely applicable to credit card fraud detection, network intrusion detection, fault diagnosis, disaster prediction, and image processing [2-8]. Presently, outlier detection is conducted with statistical techniques [9], distance-based, network-based, and density-based proximity methods [10-12], data clustering [13], and rough set data modeling [14].
Statistical methods require a single feature and a predictable data distribution. In real environments, many data distributions are unpredictable and multi-featured, so the applications of statistical methods are restricted [15]. Knorr et al. [16] proposed a distance-based method that required little information about the data set and was therefore suitable for any data distribution. Early outlier detection algorithms focused on mining global outliers. By contrast, a multidimensional algorithm must examine data locally because its data sets are unevenly distributed, complex, and difficult to characterize. Since Breunig et al. [10] put forward the concept of local outliers, local outlier detection has attracted considerable attention owing to its practical advantages in reducing time overhead and improving scalability. A local outlier detection algorithm starts by calculating the outlier value of local data and defining outlying data; it then focuses on determining the local neighborhood within which local outliers are calculated [17,18]. A clustering algorithm divides a data set into several clusters to reveal similarities and differences among objects based on their respective clusters [19]. When judging a local outlying object, a cluster is taken as its neighborhood, and the local outliers of the data are calculated within that neighborhood [20]. Outlier detection based on information entropy theory [21] judges the amount of information carried by an outlier through its entropy value and determines outlying data in light of their entropies [22]. The introduction of information entropy weighting effectively improves outlier detection accuracy [23].
Many studies [24-27] focused on rough set-based detection methods, which originated in intelligent systems. Because rough sets can model insufficient and incomplete information, they were introduced into outlier detection to handle categorical data. However, conventional outlier detection involves more numerical data than the categorical data that classical rough set-based methods handle, and the processing of mixed data with both categorical and numerical attributes, ubiquitous in real applications, has received inadequate attention. Through the adoption of robust neighborhoods, classical rough sets have been extended for better performance on numerical and mixed data. At present, neighborhood rough sets are effective for attribute reduction, feature selection, classification recognition, and uncertainty reasoning.
This study combines attribute reduction with outlier detection technology to improve detection accuracy, reduce calculation complexity, remove less influential attributes, and reduce data dimension. Data preprocessing through attribute reduction plays a significant role: reducing the experimental multi-type data yields an accuracy boost for outlier detection. Using the neighborhood combinatorial entropy model, we construct mixed-data outlier detection algorithms after defining the local outlier factor (LOF) algorithm, the neighborhood combinatorial entropy algorithm, and related concepts. Comparative data analysis verifies the detection accuracy after attribute reduction.
The rest of the paper is structured as follows. Section 2 presents the LOF algorithm and its related definitions. Section 3 discusses the neighborhood combinatorial entropy algorithm and its related definitions. Section 4 constructs the mixed-data outlier detection algorithms based on neighborhood combinatorial entropy. Section 5 carries out an experimental analysis of the advantages of attribute reduction prior to outlier detection. Finally, Section 6 concludes the paper.
The LOF algorithm is a representative outlier detection algorithm that computes local density deviation [10]. It calculates a local outlier factor for each object and judges whether the object is an outlier that deviates from other objects or a normal point.
Definition 1: We define the distance (Euclidean distance) between two objects x and y in object set U as:

d(x, y) = √(Σ_{i=1}^m (x_i − y_i)^2)   (1)

where m represents the dimension of the objects, and x_i and y_i represent the coordinates of the ith dimension of objects x and y, respectively.
Definition 2: We define the kth nearest distance of an object x in object set U as:

dk(x) = d(x, p)   (2)

where p is the kth nearest neighbor of object x over all dimensions (object x excluded). The Euclidean distance between object x and object p is the kth nearest distance of object x. As shown in Fig. 1, object y5 is the 5th closest neighbor of object x, so d5(x) = d(x, y5).
To determine the value of k, the objects in the set satisfy the following: (1) at least k objects q (object x excluded) satisfy d(x, q) ≤ d(x, p); and (2) at most k − 1 objects q (object x excluded) satisfy d(x, q) < d(x, p).
Definition 3: We define the neighborhood of k nearest neighbors of object x in object set U as:

Nk(x) = {y ∈ U, y ≠ x | d(x, y) ≤ dk(x)}   (3)

Eq. (3) describes the set of objects within the kth nearest distance (including the kth nearest distance) of object x; therefore, the number of objects in the neighborhood of x satisfies |Nk(x)| ≥ k. As shown in Fig. 1, the 5 nearest neighbors of object x form the set {y1, y2, y3, y4, y5}.
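As an illustration of Eqs. (1)-(3), the following minimal Python sketch computes the Euclidean distance, the kth nearest distance, and the k-nearest-neighbor neighborhood for objects stored as rows of a list or array; the function names are ours, not the paper's.

```python
import numpy as np

def euclidean(x, y):
    # Eq. (1): straight-line distance between two m-dimensional objects
    return float(np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

def k_distance(U, x, k):
    # Eq. (2): distance from object U[x] to its kth nearest neighbor (itself excluded)
    dists = sorted(euclidean(U[x], U[j]) for j in range(len(U)) if j != x)
    return dists[k - 1]

def k_neighborhood(U, x, k):
    # Eq. (3): indices of all objects within the kth nearest distance of U[x];
    # ties are included, so |N_k(x)| >= k
    dk = k_distance(U, x, k)
    return [j for j in range(len(U))
            if j != x and euclidean(U[x], U[j]) <= dk]
```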
Definition 4: We define the kth reachable distance from object x to object p in object set U as:

R_distk(x, p) = max{dk(x), d(x, p)}   (4)
The kth reachable distance from x to p is the larger of the kth nearest distance of x and the real distance from x to p. The distances from x to each of its k nearest objects are thus all treated as dk(x), while the distance between object x and any object beyond those k nearest objects is the real distance between the two objects.
As indicated in Fig. 2, the actual distance from x1 to p is smaller than the 5th nearest distance of x1; from the definition, R_dist5(x1, p) = d5(x1). The actual distance from x2 to p is greater than the 5th nearest distance of x2; thus, R_dist5(x2, p) = d(x2, p).
Figure 2: The 5th reachable distances from x1 and x2 to p
Definition 5: We define the local reachable density of object x in object set U as:

LRDk(x) = |Nk(x)| / Σ_{y∈Nk(x)} R_distk(y, x)   (5)
LRDk(x) measures density. The larger the value of LRDk(x), the higher the density, and the more likely object x belongs to the same cluster as its neighbors. Conversely, the smaller the value of LRDk(x), the lower the density, revealing that the object is likely to be an outlier.
If object x is in the same cluster as a neighboring object y, the reachable distance R_distk(y, x) is likely to be dk(y), the kth nearest distance of y. Otherwise, the reachable distance is likely to be the true distance d(y, x), which by Eq. (4) is then the larger of the two. Hence the denominator of Eq. (5) is smaller when x lies inside a cluster than when it lies outside one, and the value of LRDk(x) inside a cluster is correspondingly larger. When the value of LRDk(x) is small, object x is likely to be an outlier.
Definition 6: We define the local outlier factor of object x in object set U as:

LOFk(x) = (1 / |Nk(x)|) Σ_{y∈Nk(x)} LRDk(y) / LRDk(x)   (6)
LOFk(x) is the average ratio of the local reachable densities of the objects in the neighborhood Nk(x) to the local reachable density of object x. When LOFk(x) is close to 1, the local reachable density of the objects in Nk(x) is close to that of object x, and object x is likely to lie in the same cluster as its neighborhood. When LOFk(x) is smaller than 1, the local reachable density of object x is higher than that of its neighborhood, revealing that object x tends to lie among dense points. When LOFk(x) is larger than 1, the local reachable density of object x is lower than that of its neighborhood, revealing that object x is likely to be an outlier.
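Building on the helpers above, a direct transcription of Eqs. (4)-(6) might look as follows; this is a sketch for clarity rather than an optimized implementation (k-distances are recomputed instead of cached).

```python
def reach_dist(U, x, p, k):
    # Eq. (4): the larger of the kth nearest distance of U[x] and the true distance to U[p]
    return max(k_distance(U, x, k), euclidean(U[x], U[p]))

def lrd(U, x, k):
    # Eq. (5): inverse of the mean reachable distance from x's neighbors y back to x,
    # i.e. |N_k(x)| / sum_y R_dist_k(y, x)
    nbrs = k_neighborhood(U, x, k)
    return len(nbrs) / sum(reach_dist(U, y, x, k) for y in nbrs)

def lof(U, x, k):
    # Eq. (6): mean ratio of the neighbors' local reachable density to that of x;
    # values well above 1 flag likely outliers
    nbrs = k_neighborhood(U, x, k)
    return sum(lrd(U, y, k) for y in nbrs) / (len(nbrs) * lrd(U, x, k))
```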
The LOF algorithm determines whether a point is an outlier by comparing the local density of each object x with those of its neighboring objects. The lower the density of object x, the more likely it is an outlier. The density is calculated from the distances between objects: the farther the distance, the lower the density, and the higher the degree of outlierness. The neighborhood of k nearest neighbors is introduced as a substitute for direct global calculation, so that one or more clusters of dense points are determined from local density, thereby identifying multiple clusters. A cluster may comprise multi-type data with categorical, numerical, or mixed attributes. However, the distance of Eq. (1) can only process numerical data, so the accuracy of the LOF algorithm degrades on categorical or unknown data types.
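For purely numerical data, an off-the-shelf implementation of this scheme is available in scikit-learn; a minimal usage sketch (the data and parameter choices here are ours, not the paper's) is:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),     # a dense Gaussian cluster
               rng.uniform(-6, 6, (10, 2))])   # a few scattered points

detector = LocalOutlierFactor(n_neighbors=5)
detector.fit(X)
scores = -detector.negative_outlier_factor_    # larger score => more outlying
top10 = np.argsort(scores)[::-1][:10]          # indices of the 10 most outlying points
```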
The neighborhood combinatorial entropy calculation follows two aspects: neighborhood approximate accuracy and neighborhood conditional entropy. Neighborhood approximate accuracy measures the ability to divide the system from the set perspective [28,29], whereas neighborhood conditional entropy measures the ability to divide the system from the knowledge perspective [30].
An information system, a basic structure in data mining, is expressed as a quadruple IS = (U, A, V, f). Here, U = {x1, x2, ..., xn} is a non-empty finite set of objects; A = {a1, a2, ..., al} is a non-empty set of attributes; V is the union of the attribute domains Va; and f is the information function f: U × A → V such that ∀x ∈ U, ∀a ∈ A, f(x, a) ∈ Va.
In addition, attributes are categorized into conditional and decision attributes. The attribute set A is therefore the union of the conditional attribute set C and the decision attribute set D, namely A = C ∪ D, and the information system is then called a decision system DS = (U, C ∪ D, V, f).
When the neighborhood range is set manually, a neighborhood threshold δ is added to the information system, which becomes a neighborhood information system NI = (U, A, V, f, δ). In this case, δ is also called the neighborhood radius.
Definition 7: We define the distance measure of two objects on attribute subset B ⊆ C as:

DistB(x, y) = √(Σ_{i=1}^m dist_ai^2(x, y))   (7)

In Eq. (7), B = {a1, a2, ..., am} is a subset of attributes, where m represents the dimension of the object, namely the number of attributes in B; ai corresponds to the ith attribute; x and y are in U; and dist_ai^2(x, y) is the square of the distance between objects x and y on the ith attribute.

The calculation of the single-attribute distance depends on the attribute type [31]. When ai is a numerical attribute,

dist_ai(x, y) = |f(x, ai) − f(y, ai)|   (8)

To eliminate the influence of dimension and value range between attributes, numerical attributes are pre-processed into dimensionless data so that each attribute value falls within the range [0, 1].

When ai is a categorical attribute,

dist_ai(x, y) = 0 if f(x, ai) = f(y, ai), and dist_ai(x, y) = 1 otherwise   (9)

Categorical data are converted into numerical data, with values determined by their categories. From Eq. (8), the values of all numerical attributes lie within [0, 1]; thus, we set the maximum distance between different categories to 1 and the minimum to 0. After converting categorical data into numerical data, the distance between objects is obtained through numerical calculation.

When ai is an unknown attribute,

dist_ai(x, y) = 1   (10)

Eq. (10) introduces a separate classification for unknown attributes. Data in this category belong to a cluster different from other data, so the maximum distance of 1 is assigned according to the different attribute classifications in the clusters.
When processing different data categories, the corresponding formula is selected, and the distance for mixed data is then obtained from Eq. (7). Applicability to mixed data is thus realized by adopting a different distance calculation for each attribute type.
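A compact Python sketch of the mixed-attribute distance of Eqs. (7)-(10), with one declared type per attribute (the function names and type labels are ours), is:

```python
import math

def attr_dist(xv, yv, kind):
    # Eq. (8): numerical values are assumed pre-normalized into [0, 1]
    if kind == "numerical":
        return abs(xv - yv)
    # Eq. (9): categorical values contribute 0 when equal, otherwise the maximum 1
    if kind == "categorical":
        return 0.0 if xv == yv else 1.0
    # Eq. (10): an unknown attribute type always contributes the maximum distance 1
    return 1.0

def dist_B(x, y, kinds):
    # Eq. (7): Euclidean-style combination over the attribute subset B
    return math.sqrt(sum(attr_dist(xv, yv, k) ** 2
                         for xv, yv, k in zip(x, y, kinds)))
```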
DistB(x, y) serves as the effective distance in the neighborhood information system. Combined with the neighborhood threshold parameter δ, it defines the neighborhood relationship.
Definition 8: In a neighborhood information system NI = (U, A, V, f, δ), the neighborhood of object x on attribute subset B is:

NδB(x) = {y ∈ U | DistB(x, y) ≤ δ}   (11)
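Continuing the sketch above, the δ-neighborhood of Eq. (11) restricted to an attribute subset B (given as a list of attribute indices) can be computed as:

```python
def delta_neighborhood(U, x, B, kinds, delta):
    # Eq. (11): indices of all objects within distance delta of U[x] over subset B
    # (x itself is included, since its distance to itself is 0)
    xB = [U[x][a] for a in B]
    kB = [kinds[a] for a in B]
    return {j for j in range(len(U))
            if dist_B(xB, [U[j][a] for a in B], kB) <= delta}
```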
Definition 9: We define the upper approximation and the lower approximation of an object set X, X ⊆ U, with respect to attribute subset B as:

NUδB(X) = {x ∈ U | NδB(x) ∩ X ≠ ∅}   (12)

NDδB(X) = {x ∈ U | NδB(x) ⊆ X}   (13)
Their relationship is illustrated in Fig. 3.
Figure 3: Conceptual diagram of upper and lower approximations
Definition 10: We divide the object set U into U/D = {D1, D2, ..., Dm} according to its decision attributes. The neighborhood approximate accuracy of attribute subset B is:

αδB(D) = Σ_{i=1}^m |NDδB(Di)| / Σ_{i=1}^m |NUδB(Di)|   (14)
As shown in Fig. 3, NDδB(X) ⊆ NUδB(X), so 0 ≤ αδB(D) ≤ 1. As the attribute subset B grows, the lower approximation of a set X generally increases more than the upper approximation, and the neighborhood approximate accuracy αδB(D) increases accordingly. In some cases, however, the increase of the lower approximation equals that of the upper approximation as B grows, and αδB(D) remains unchanged. This case has no impact on the final result (refer to Definition 13 for a detailed analysis).
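The approximations of Eqs. (12)-(13) and the accuracy of Eq. (14) then follow directly from the δ-neighborhood sketch; here α is computed as the ratio of summed lower to summed upper approximation sizes over the decision classes, which is our assumed reading of Eq. (14).

```python
def approximations(U, X, B, kinds, delta):
    # Eqs. (12)-(13): neighborhoods overlapping X go to the upper approximation,
    # neighborhoods fully contained in X go to the lower approximation
    upper, lower = set(), set()
    for i in range(len(U)):
        nbr = delta_neighborhood(U, i, B, kinds, delta)
        if nbr & X:
            upper.add(i)
        if nbr <= X:
            lower.add(i)
    return upper, lower

def approx_accuracy(U, decision_classes, B, kinds, delta):
    # Eq. (14), assumed form: sum of |lower| over sum of |upper| across U/D
    ups = lows = 0
    for X in decision_classes:          # each X is a set of object indices
        upper, lower = approximations(U, X, B, kinds, delta)
        ups += len(upper)
        lows += len(lower)
    return lows / ups
```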
Definition 11: We define the neighborhood information entropy of attribute subset B over the object set U as:

NE(B) = −(1/|U|) Σ_{x∈U} log(|NδB(x)| / |U|)   (15)
Definition 12: We define the neighborhood conditional entropy of attribute set Q under the condition of attribute set P, where Q, P ⊆ C, as:

NE(Q|P) = −(1/|U|) Σ_{x∈U} log(|NδP(x) ∩ NδQ(x)| / |NδP(x)|)   (16)
Here Q and P are conditional attribute sets drawn from the conditional attribute set C. Eq. (16) reflects the uncertainty of conditional attribute set P with respect to conditional attribute set Q. If Q is the decision attribute set D, NE(D|P) represents the uncertainty of the conditional attribute set P with respect to the decision attribute set D. The neighborhood conditional entropy is monotonic and bounded, and its value decreases as the conditional attribute set P grows.
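Under the standard neighborhood-entropy formulations matching Definitions 11 and 12 (our assumed reading of Eqs. (15)-(16)), the two quantities can be sketched as:

```python
import math

def neighborhood_entropy(U, B, kinds, delta):
    # Eq. (15), assumed form: NE(B) = -(1/|U|) * sum_x log(|N_B(x)| / |U|)
    n = len(U)
    return -sum(math.log(len(delta_neighborhood(U, i, B, kinds, delta)) / n)
                for i in range(n)) / n

def conditional_entropy(U, P, Q, kinds, delta):
    # Eq. (16), assumed form:
    # NE(Q|P) = -(1/|U|) * sum_x log(|N_P(x) ∩ N_Q(x)| / |N_P(x)|)
    n = len(U)
    total = 0.0
    for i in range(n):
        Np = delta_neighborhood(U, i, P, kinds, delta)
        Nq = delta_neighborhood(U, i, Q, kinds, delta)
        total += math.log(len(Np & Nq) / len(Np))  # both sets contain i, so nonzero
    return -total / n
```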
Definition 13: We divide the object set U into U/D = {D1, D2, ..., Dm} based on its decision attributes. The neighborhood combinatorial entropy NCEδB(Di) of Di on attribute subset B is defined by Eq. (17), which combines the neighborhood approximate accuracy αδB(Di) of Eq. (14) with the neighborhood conditional entropy NE(Di|B) of Eq. (16).
From Eq. (17), the neighborhood combinatorial entropy is composed of Eqs. (14) and (16), reflecting uncertainty in set theory and in knowledge, respectively. Neighborhood combinatorial entropy thus interprets uncertainty from multiple angles and is more comprehensive than either measure alone.
Although αδB(Di), the neighborhood approximate accuracy of Definition 10, is non-monotonic, this does not affect the final neighborhood combinatorial entropy: αδB(Di) increases or remains unchanged as the attribute subset B grows. The neighborhood conditional entropy, by contrast, determines the behavior of the neighborhood combinatorial entropy, and the analysis in Definition 12 shows that it is strictly monotonic. Even when αδB(Di) remains constant, the neighborhood combinatorial entropy therefore maintains its monotonicity.
From the above analysis, αδB(Di) increases or remains unchanged as the attribute subset B grows. In Eq. (17), the neighborhood conditional entropy NE(Di|B) decreases as B grows, so the neighborhood combinatorial entropy NCEδB(Di) increases as B grows. As the conditional attribute set changes, the higher the neighborhood combinatorial entropy obtained with a conditional attribute, the more important that conditional attribute is to the whole system.
Definition 14: We define the significance of a conditional attribute al ∈ C − B (i.e., a conditional attribute outside the attribute subset B) with respect to B as:

Sig(al, B, D) = NCEδB∪{al}(D) − NCEδB(D)   (18)
First, we initialize the attribute subset B as an empty set. Then, we evaluate all conditional attributes {a1, a2, ..., am} according to Eq. (18). The conditional attribute that attains the maximum value of Sig(al, B, D) among all conditional attributes outside B has the highest significance and is written into B. Iterating this procedure yields further conditional attributes in descending order of significance. Conditional attributes whose significance exceeds a threshold are kept in the attribute subset B; the others are removed. When the procedure terminates, an attribute-reduced data set is available (a sketch of this greedy selection appears below).
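The selection loop itself is independent of how significance is computed; a generic sketch of the greedy procedure, with significance passed in as a function (e.g., the Eq. (18) gain), is:

```python
def attribute_reduction(significance, C, eps=1e-9):
    # Greedy forward selection: repeatedly add the conditional attribute outside B
    # with the largest significance, stopping once no attribute still helps
    B = []
    while True:
        candidates = [a for a in C if a not in B]
        if not candidates:
            break
        a_best = max(candidates, key=lambda a: significance(a, B))
        if significance(a_best, B) <= eps:
            break
        B.append(a_best)
    return B
```

Passing significance in as a function keeps the sketch agnostic to the exact form of Eq. (17).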
From an information theory perspective, the entropy value reflects the importance of a conditional attribute to the information system and thereby determines whether the attribute is removed during reduction. Entropy is non-negative, i.e., H(X) ≥ 0, and bounded above by its maximum value. Neighborhood combinatorial entropy measures significance, and the associated uncertainty, in both the set sense and the knowledge sense; the measurement of significance is therefore a multi-angle operation.
Algorithm 1: LOF algorithm
Input: U = {x1, x2, ..., xn}; k, the fixed value of the kth distance; m, the number of outliers
Output: outliers, the outlier set
1: for i to n
2:   for j to n
3:     Calculate the Euclidean distance d(xi, xj)
4:   End for
5:   Calculate the kth nearest distance dk(xi)
6:   Calculate Nk(xi), the neighborhood of k nearest neighbors of xi
7:   for p to n
8:     Calculate the reachable distance R_distk(xi, xp)
9:   End for
10:  Calculate the local reachable density LRDk(xi)
11:  Calculate the local outlier factor LOFk(xi)
12: End for
13: Sort xi by the value of LOFk(xi)
14: Initialize outliers = ∅
15: for t to m
16:   outliers.append(U[t])
17: End for
18: Return outliers
Algorithm 2: Neighborhood combinatorial entropy algorithm
Input: Neighborhood decision information system NI = (U, A, V, f, δ); U = {x1, x2, ..., xn}; U/D = {D1, D2, ..., Dm}
Output: NCEδB(D), the neighborhood combinatorial entropy
1: for i to n
2:   Calculate NδB(xi), the δ neighborhood of xi
3:   Calculate |NδB(xi)|/|U|
4: End for
5: Calculate NE(B), the neighborhood information entropy
6: Calculate NE(D|B), the neighborhood conditional entropy
7: for j to m
8:   Calculate NUδB(Dj), the upper approximation, and NDδB(Dj), the lower approximation
9: End for
10: Calculate αδB(D), the neighborhood approximate accuracy
11: Calculate NCEδB(D), the neighborhood combinatorial entropy
12: Return NCEδB(D)
Algorithm 3: Attribute reduction algorithm based on neighborhood combinatorial entropy
Input: Neighborhood decision information system NI = (U, A, V, f, δ); U = {x1, x2, ..., xn}; U/D = {D1, D2, ..., Dm}
Output: Reduced set B
1: Initialize B = ∅
2: Calculate NCEδC(D), the neighborhood combinatorial entropy of the conditional attribute set C
3: while (Sig(al, B, D) ≠ 0)
4:   if Sig(amax, B, D) = max{Sig(al, B, D) | al ∈ C − B}
5:     B = B ∪ {amax}
6:   End if
7: End while
8: Return B
In Algorithm 1, the cost of the LOF algorithm is dominated by Euclidean distance computation. A higher data dimension implies more squared terms and longer calculation time; a lower data dimension therefore saves significant calculation time.
Algorithm 1 alone has limited capability for mixed-data outlier detection. Algorithm 2 adopts different distance calculations for different attribute types, so categorical and mixed data can participate in the computation. After attribute reduction with Algorithm 3, the reduced data are processed with the LOF algorithm (Algorithm 1).
Attribute reduction filters out redundant data and shortens the calculation time without altering the applicability of the system; because some attributes are redundant and cause errors, filtering them out improves the system's accuracy. In Algorithm 2, neighborhood combinatorial entropy determines the significance of a conditional attribute within the attribute subset for the entire system. Applying Algorithms 2 and 3 before the LOF algorithm reduces the data size processed by outlier detection, improves the outlier detection accuracy, and reduces the proportion of misjudgments.
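Putting the pieces together, the overall flow, reduce first, then run LOF on the projected data, can be sketched as follows, reusing the attribute_reduction and lof sketches above and assuming the reduced attributes are numeric or already converted per Eqs. (8)-(10):

```python
def detect_outliers(U, C, significance, k, m):
    # Algorithm 3 (reduction) feeding Algorithm 1 (LOF): keep significant attributes,
    # project the data onto them, score every object, and return the m most outlying
    B = attribute_reduction(significance, C)
    U_red = [[row[a] for a in B] for row in U]
    scores = [lof(U_red, i, k) for i in range(len(U_red))]
    return sorted(range(len(U_red)), key=lambda i: scores[i], reverse=True)[:m]
```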
5.1.1 220 Randomly Generated 2-Dimensional Data
First, 100 2-dimensional data points following a normal distribution are randomly generated. To test the detection ability of the LOF algorithm, the 100 random points are translated to form a second cluster far from the first, yielding 200 dense points in total. As presented in Fig. 4a, the dense points form two clusters. The distribution of outliers is shown in Fig. 4b: the blue dots are the 200 dense points, and the yellow dots are the 20 outliers. The outliers lie relatively far from the two clusters of dense points, providing a clear reference and contrast for the experiment on Algorithm 1.
Figure 4: Two clusters with 2-dimensional attributes: (a) dense points; (b) points with man-made outliers
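The construction of this synthetic set can be reproduced along the following lines; the translation offset and the outlier sampling range are our assumptions, since the paper does not state them.

```python
import numpy as np

rng = np.random.default_rng(42)
cluster = rng.normal(0.0, 1.0, (100, 2))              # 100 normally distributed points
dense = np.vstack([cluster, cluster + [12.0, 12.0]])  # translated copy: 200 dense points
outliers = rng.uniform(-5.0, 17.0, (20, 2))           # 20 scattered outliers
data = np.vstack([dense, outliers])                   # 220 objects in total
```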
5.1.2 220 Randomly Generated 8-Dimensional Data
A total of 200 8-dimensional dense data points and 20 8-dimensional outliers are randomly generated, using the same method as for the random 2-dimensional data.
5.1.3 220 Randomly Generated 16-Dimensional Data
A total of 200 16-dimensional dense data points and 20 16-dimensional outliers are generated following the procedure illustrated in Fig. 4.
5.1.4 Wisconsin Breast Cancer Data Set
The Wisconsin breast cancer data set contains 699 objects with 10 attributes: 9 numerical conditional attributes and 1 decision attribute. All cases fall into two categories: benign (458 cases) and malignant (241 cases). Some malignant examples are removed to form an unbalanced distribution, so that the experimental data match practical applications in which outliers are few. The resulting data set contains 444 benign instances (91.93%) and 39 malignant instances (8.07%). Malignant objects are treated as outliers, as presented in Fig. 5.
Figure 5: The distribution pie chart of the Wisconsin breast cancer data set
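A hypothetical loading script for this data set from the UCI repository is sketched below; the column names are our labels, and the exact downsampling of malignant cases used in the paper is not specified, so the seed here is arbitrary.

```python
import pandas as pd

url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "breast-cancer-wisconsin/breast-cancer-wisconsin.data")
cols = ["id", "clump", "size_unif", "shape_unif", "adhesion", "epi_size",
        "bare_nuclei", "chromatin", "nucleoli", "mitoses", "class"]
df = pd.read_csv(url, names=cols, na_values="?").dropna()

benign = df[df["class"] == 2]                          # class 2 = benign, 4 = malignant
malignant = df[df["class"] == 4].sample(39, random_state=0)
unbalanced = pd.concat([benign, malignant])            # few outliers, as in the paper
```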
5.1.5 Annealing Data Set
The annealing data set contains 798 objects with 38 attributes: 30 categorical conditional attributes, 7 numerical conditional attributes, and 1 decision attribute. The 798 objects fall into 5 categories. The first category contains 608 objects (76.19%), and the remaining 4 categories contain 190 objects (23.81%). Classes 2-5 are all regarded as outliers. The distribution is visualized in Fig. 6.
Figure 6: The distribution pie chart of the annealing data set
Tab. 1 lists the multidimensional data sets, which include purely numerical data and data comprising both numerical and categorical attributes; all are within the capacity of the proposed algorithm and broaden the application scope of attribute reduction algorithms.
Table 1: Four multidimensional data sets
There are 20 outliers in the randomly generated 2-dimensional data set. With the number of outliers set to 20, the objects with the 20 largest local outlier factor values from Eq. (6) coincide exactly with those in Fig. 4b, so the detection accuracy is 100%.
For the 8-dimensional data, only 18 of the 20 detected objects are artificially generated outliers; the others are misjudged. The results are thus not as good as for the 2-dimensional data. In the 16-dimensional data, only 17 of the 20 artificially set outliers are identified, and the others are misjudged. Tab. 2 shows the accuracy of outlier detection of the LOF algorithm on the three randomly generated data sets.
Table 2: The accuracy of outlier detection by the LOF algorithm on three randomly generated data sets
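The detection accuracy reported in these tables is the fraction of the m highest-scoring objects that belong to the presupposed outlier set, which can be computed as:

```python
def top_m_accuracy(scores, true_outliers, m):
    # Flag the m objects with the largest LOF scores and measure the hit rate
    flagged = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:m]
    return sum(i in true_outliers for i in flagged) / m
```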
As shown in Tab. 2, the LOF algorithm achieves higher outlier detection accuracy on the random 2-dimensional data than on the 8-dimensional or 16-dimensional data, and higher accuracy on the 8-dimensional data than on the 16-dimensional data, revealing that detection accuracy decreases as dimension increases. Data dimension thus influences the accuracy of the outlier detection algorithm, and attribute reduction before outlier detection can improve accuracy.
The Wisconsin breast cancer data set comprises 39 presupposed outliers. Among the 39 objects with the largest local outlier factors calculated by the LOF algorithm, 34 are real outliers, and the other 5 are misjudged dense points. The annealing data set has 190 presupposed outliers, of which 72 detected objects are real outliers and the others are misjudged. Tab. 3 shows the accuracy of the algorithm on the real data sets.
Table 3: The detection accuracy on real data sets obtained directly by the LOF algorithm
Each of the four multidimensional data sets (excluding the 2-dimensional data) underwent attribute reduction with Algorithms 2 and 3, in which the attribute subset is determined by calculating neighborhood combinatorial entropy. After attribute reduction, the randomly generated 8-dimensional data still has 8 attributes; the randomly generated 16-dimensional data is reduced to 15 attributes, the Wisconsin breast cancer data set to 6, and the annealing data set to 21. The attribute counts before and after reduction are illustrated in Fig. 7.
Figure 7: The comparison of the number of attributes before and after reduction in the four data sets
As Fig. 7 shows, the system reduces the dimension of the data sets, although the dimensions of some sets remain unchanged. For the 8-dimensional data, the algorithm judges all attributes to have equal influence on the system, so none are filtered out.
Algorithm 1 is then applied to detect outliers in the attribute-reduced data sets; the results are listed in Tab. 4. After attribute reduction, among the top 20 objects with the largest local outlier factors in the 16-dimensional data, 17 are true outliers and the others are misjudged, for a detection accuracy of 85.00%. For the Wisconsin breast cancer data set, among the top 39 objects with the largest local outlier factors after reduction, 36 are true outliers and the others are misjudged; the accuracy rises to 92.31%. For the annealing data set, among the top 190 objects with the largest local outlier factors after reduction, 88 are true outliers and 102 are misjudged; the detection accuracy reaches 46.32%. Fig. 8 presents the outlier detection results before and after attribute reduction.
Table 4: The detection accuracy of the LOF algorithm after attribute reduction in three data sets
As revealed in Fig. 8, attributes identified as useless are removed by the system, reducing the data size to be processed and shortening detection time. The attribute reduction algorithm filters out error-prone attributes, raising the outlier detection rate. Enhanced accuracy is obtained on two of the attribute-reduced data sets, indicating the validity and effectiveness of the proposed reduction algorithm.
Figure 8: The comparison of outlier detection accuracy before and after reduction in three data sets
The accuracy on the randomly generated 16-dimensional data remains the same before and after attribute reduction: only one attribute is filtered out, which barely influences the system, so detection accuracy is left intact by attribute reduction.
This study combined an attribute reduction algorithm with the LOF algorithm to improve the accuracy of outlier detection for high-dimensional numerical and mixed data. After presenting the attribute reduction and LOF outlier detection algorithms, the study analyzed the monotonicity of the attribute reduction algorithm and explained how the processed data are applied in outlier detection. We performed experiments applying the LOF algorithm to data sets of different dimensions and assessed its performance by calculating detection accuracy. We then inserted the significance determination and attribute reduction algorithms before the LOF algorithm for the five data sets and calculated the new detection accuracy. Finally, we compared the two groups of results before and after attribute reduction to assess feasibility and effectiveness. The comparison shows that combining the attribute reduction algorithm with outlier detection technology reduces data dimension and calculation time and improves outlier detection accuracy.
Funding Statement: The authors acknowledge the support of the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (SML2020SP007). This paper is supported by the National Natural Science Foundation of China (Nos. 61772280 and 62072249).
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.