
    Clustering Analysis of Interval Data Based on Kernel Density Estimation

2021-01-08

LI Mengyao, XIA Liyun, LIU Ye, CHEN Jiaolong

(1. Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha 410081, China; 2. College of Information Science and Engineering, Hunan Normal University, Changsha 410081, China; 3. Hunan Normal University Journals, Hunan Normal University, Changsha 410081, China)

Abstract: As one of the vital tasks in mining interval data, clustering faces great difficulties in measuring the similarity or distance between objects. Existing traditional clustering methods have been extended to interval data via geometric distances, which mainly consider the bounds of the intervals. These methods neglect the information inside the interval data. Therefore, we take the probability distributions of interval values into consideration by using the whole interval data to estimate the probability density function of a cluster. To estimate this probability density function, we propose a new kernel density estimation approach, which is a nonparametric estimation for interval data. We then define a distance between interval objects via the probability density function obtained by the new kernel density estimation. Finally, we construct an adaptive clustering method for interval data. Experimental results show that the proposed method is effective and also indicate that it is more reasonable to use the probability distribution of interval values than to only consider the endpoints of intervals.

Key words: clustering; interval data; probability distribution; density estimation

    0 Introduction

The clustering problem has been deemed a significant issue in data mining[1], machine learning[2], data streams[3-4], and information retrieval[5]. It tries to group data into clusters so that objects in the same cluster have a high degree of similarity and objects belonging to different clusters have a high degree of diversity.

Through cluster analysis, we can discover more knowledge from data. Cluster analysis methods can be divided into hierarchical methods[6], partition methods[7], density-based clustering methods[8] and so on. This article focuses on partition methods, which divide input data into a fixed number of clusters. K-means[9], a representative partition method, puts samples with high similarity or low distance into the same cluster. Due to its excellent speed and good scalability, the K-means clustering algorithm is regarded as a renowned clustering method.

Uncertainty is an important issue in data analysis[10-11]. Nowadays, in different application fields, representing uncertain data as interval data is increasingly commonplace[12]. Interval data have three characteristics: randomness, vagueness and imprecision. For these reasons, more and more scholars pay attention to interval data. The information hidden in interval data is that each point has a chance to be the true value with a different probability in real life. So we bring kernel density estimation into analyzing probability on interval data. Concerning clustering methods, Souza[13] proposed a clustering approach based on the City-block distance for interval data. Carvalho introduced a dynamic clustering algorithm for interval data[14]. Bock[15] discussed a probabilistic and reasoning framework for cluster analysis, rather than heuristic algorithms. Jin[16] proposed a method based on a mixed interval slope technique and interval calculation. Based on the Wasserstein distance, Verde[17] presented a clustering technique for interval data. Based on an adaptive Mahalanobis distance, Hajjar[18] proposed a self-organizing mapping approach to realize interval data clustering with topology preservation. Carvalho[19] proposed the adaptive Hausdorff distances. Mali defined similarity and dissimilarity measures between interval data via their position, span and content and then introduced this distance into a clustering algorithm[20].

Though there have been some methods for measuring the similarity or distance between intervals, they mainly focus on the endpoints of an interval and ignore the probability distribution on the possible range. As a matter of fact, we can regard an interval as a variable over the interval value range. Each point has a chance to be the true value with a different probability in real life. We intend to estimate the possibility of each point in the whole interval for the respective cluster. Using the probability distribution, we measure the similarity or distance between objects and each cluster. So we extend kernel density estimation to interval data in order to estimate the probability distribution of a cluster and propose a new method to measure the distance between an object and a cluster. Moreover, we construct a clustering approach for interval data based on the proposed distance measure.

This paper is organized as follows. Section 1 reviews the basic clustering methods for interval-valued data and then recalls kernel density estimation on single values. In Section 2, we propose kernel density estimation on interval data and put it into the adaptive clustering method with the defined distance. Section 3 presents experiments to show the efficiency of the adaptive clustering algorithm. Section 4 concludes the whole paper.

    1 Preliminary knowledge

We review the dynamic clustering of interval data in the first place, and then we introduce the basic knowledge of kernel density estimation.

    1.1 Dynamic clustering of interval data

Let Ω be a set of objects. Every object xi is represented by a vector of feature values xi = (xi1, …, xip), where xij = [aij, bij]. Dynamic clustering divides the whole Ω into K clusters {C1, …, CK}, and we obtain the partition P = (C1, …, CK) by optimizing the given clustering criterion:

W(P, Y, d) = ∑_{k=1}^{K} ∑_{xi∈CK} dk(xi, yk),  (1)

where dk(xi, yk) is a criterion to measure the dissimilarity between an object xi ∈ CK and the class prototype yk of CK, in which i denotes the ith element and K represents the Kth cluster. In a word, representation steps and allocation steps are the focus of dynamic clustering algorithms:

(a) Representation step

In this step, we fix the partition P and search for the prototype yk of each cluster CK that minimizes the criterion W(P, Y, d).

    (b) Allocation step

In this step, we fix the vector of prototypes Y. The partition P is obtained by minimizing W(P, Y, d), which yields the clusters CK = {xi ∈ Ω | d(xi, yK) ≤ d(xi, yk), ∀k = 1, …, K}.

    1.2 Kernel density estimation on single value

Kernel density estimation, a nonparametric method, can estimate the probability density function of continuous variables. The Gaussian kernel is widely used because of its continuity and differentiability; in this case, a Gaussian function is used to weight the data points. Kernel density estimation is quite different from parametric estimation: it does not need to assume a specific density model in advance. In this paper, we mainly consider the popular Gaussian kernels. For ∀xi ∈ Ω, xi denotes an object. Each Gaussian kernel function with bandwidth h takes a sample point xi as its center. Using the bandwidth, we can control the level of smoothness. The kernel density estimation is stated as follows:

f̂h(x) = (1/(nh)) ∑_{i=1}^{n} K((x − xi)/h) = (1/(nh√(2π))) ∑_{i=1}^{n} exp(−(x − xi)²/(2h²)).  (2)

In the d-dimensional (d ≥ 2) case, the Gaussian functions with bandwidths hj (1 ≤ j ≤ d) can be written as follows:

f̂(x) = (1/n) ∑_{i=1}^{n} ∏_{j=1}^{d} (1/(hj√(2π))) exp(−(xj − xij)²/(2hj²)).  (3)
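To make the estimator concrete, the following is a minimal sketch of the one-dimensional Gaussian kernel density estimate in Python. The experiments in Section 3 were run in Matlab, so this NumPy-based function and its name are purely illustrative.

import numpy as np

def gaussian_kde_1d(samples, h):
    # Gaussian kernel density estimate built from single-valued samples
    # with bandwidth h, as in Eq. (2).
    samples = np.asarray(samples, dtype=float)
    n = samples.size

    def f_hat(x):
        # One Gaussian kernel centred at every sample point xi.
        u = (x - samples) / h
        return np.exp(-0.5 * u ** 2).sum() / (n * h * np.sqrt(2.0 * np.pi))

    return f_hat

# Example: estimate the density of a small sample at x = 5.0.
f = gaussian_kde_1d([4.9, 5.1, 5.4, 6.0, 6.2], h=0.3)
print(f(5.0))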

    2 Clustering interval data based on the probability distribution

In this section, we will propose kernel density estimation on interval values in order to provide a way to find the density function that represents a cluster instead of the prototype yk in the basic dynamic clustering methods. Then we define the similarity between a cluster and an object. In the end, an adaptive clustering algorithm is constructed.

    2.1 Kernel density estimation on interval value

Kernel density estimation is popular since it is a nonparametric way that does not assume a probability density model in advance. It uses the kernel function to weight the data points. Enlightened by this idea, we propose an adaptive strategy to fit interval values.

A set of objects Ω is made up of xi, which is represented by a feature value xi = [ai, bi]. In the one-dimensional case, we can define the kernel density estimation on interval values as:

    (4)

where n means the number of objects and y is a point from the interval [ai, bi]. We can estimate the density function for the given feature in this way.

Example 1. To better understand the proposed kernel density estimation method for interval values, we analyze the following example in detail.

    (5)

So according to the kernel density estimation, we can get the density function of the kth cluster for feature j:

    (6)

where n means the number of objects belonging to the kth cluster, y is a point from the interval [aij, bij] and j denotes the jth attribute. We can estimate the density function of a cluster for the given feature in this way.
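Since the cluster density in Eq. (6) is built from whole intervals rather than single points, a simple way to picture it is the sketch below: each interval [ai, bi] contributes a Gaussian kernel averaged over points spread uniformly inside the interval. The uniform grid and the function name interval_kde are illustrative assumptions for this sketch, not the exact estimator of Eqs. (4) and (6).

import numpy as np

def interval_kde(intervals, h, n_grid=20):
    # Sketch of a density estimate for one cluster and one feature, assuming
    # every interval [a, b] contributes a Gaussian kernel averaged over points
    # spread uniformly inside the interval.
    intervals = np.asarray(intervals, dtype=float)   # shape (n, 2)
    n = intervals.shape[0]

    def f_hat(x):
        total = 0.0
        for a, b in intervals:
            ys = np.linspace(a, b, n_grid)           # points inside the interval
            u = (x - ys) / h
            kernels = np.exp(-0.5 * u ** 2) / (h * np.sqrt(2.0 * np.pi))
            total += kernels.mean()                  # average over the interval
        return total / n

    return f_hat

# Density of a toy cluster for one feature, evaluated at x = 5.0.
f_k = interval_kde([[4.8, 5.2], [4.9, 5.5], [5.1, 5.6]], h=0.1)
print(f_k(5.0))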

    2.2 Distance between a cluster and an interval

We can get the density function of a cluster for a feature to represent the cluster instead of the prototype yk in the basic dynamic clustering methods. In the next step, we consider how to measure the distance between a cluster and a sample. Using the estimated probability density function, we know the probability that each point in the value range occurs in each cluster. For an interval, we do not know what the true value is. But by integrating the probability density function of a cluster over the interval, we obtain the probability of the interval appearing in the cluster. The greater the probability of the interval appearing in the cluster, the greater the probability that the interval belongs to this cluster and the greater the similarity between the interval and this cluster. The higher the similarity between the interval and the cluster, the smaller the distance between them.

A set of objects Ω is made up of xi, which is represented by a vector of feature values xi = (xi1, …, xip), where xij = [aij, bij]. Dynamic clustering divides the whole Ω into K clusters {C1, …, CK}. We define the distance between a cluster and an object as:

    (7)

    Through the above distance formula, we can get the dissimilarity between a cluster and an object. The adequacy criterion we used is defined as follows:

W(P) = max{dk(CK, xi), CK ∈ P},  (8)

where dk(CK, xi) is an adaptive distance between an object xi ∈ CK and the cluster CK.
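The discussion above reduces the distance to an integral of the cluster density over the interval: a larger integrated probability means a smaller distance. The sketch below illustrates this with a simple numerical integration; converting the probability into a distance as one minus the integrated mass is an illustrative assumption, not necessarily the exact form of Eq. (7).

import numpy as np

def cluster_interval_distance(f_hat, interval, n_grid=200):
    # Distance between a cluster (given its estimated density f_hat for one
    # feature) and an interval object [a, b]: integrate the density over the
    # interval and map a larger probability to a smaller distance.
    # The "1 - probability" conversion is an illustrative assumption.
    a, b = interval
    xs = np.linspace(a, b, n_grid)
    ys = np.array([f_hat(x) for x in xs])
    prob = np.trapz(ys, xs)            # probability mass of the cluster on [a, b]
    return 1.0 - min(prob, 1.0)

# Example with a uniform "density" on [0, 10]: the integrated mass on [2, 4]
# is about 0.2, so the distance is about 0.8.
f_uniform = lambda x: 0.1 if 0.0 <= x <= 10.0 else 0.0
print(cluster_interval_distance(f_uniform, (2.0, 4.0)))   # ≈ 0.8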

Fig.1 Estimated probability density distribution of Example 1

    Fig.2 Schematic diagram of Example 2

We can find the fitting k by minimizing the distance between an object xi ∈ CK and the cluster CK, and assign the object to the most reasonable cluster.

    2.3 An adaptive clustering algorithm

In this part, we construct the clustering algorithm shown in Algorithm 1 by estimating the density functions and measuring the above distance between a cluster and an object. When initializing the clusters, we choose a partition randomly. Then we compute the density function of each cluster and minimize the distance between the clusters and the objects. Finally, we allocate xi to the most probable cluster and repeat this process until the clusters do not change anymore. To show the detailed process, an example is listed as follows, and a short code sketch of the whole workflow is given after the example.

Example 3. We choose the Iris data set from the UCI Machine Learning Repository as an example to show the results of the improved algorithm. The Iris data set has 150 instances, 4 attributes and 3 classes. The three classes are Iris Virginica, Iris Versicolour and Iris Setosa. According to Algorithm 1, the samples are first randomly grouped into clusters. Then we get the density function of each cluster and calculate the distance from a sample to each cluster. After comparing the distances, the sample is put into the cluster with the smallest distance. The same operation is then used to classify the second element, until all elements are reclassified. Then we recalculate the density function of each cluster and repeat the above operation until the allocation stays unchanged. Subfigures (a), (b) and (c) of Figure 3 are the probability density function images of the first, second and third clusters under the first attribute in the last iteration, respectively. According to subfigure (a), we can find that the values of instances belonging to Iris Virginica are more likely to appear between 6.0 and 7.0. Subfigure (b) indicates that values of instances belonging to Iris Versicolour are more likely to occur between 5.5 and 6.0. Values of instances belonging to Iris Setosa are more likely to occur between 4.5 and 5.5 according to subfigure (c). In Figure 4, we use three colors to represent the result of clustering based on Algorithm 1. We use the value of the second attribute as the x axis and the value of the fourth attribute as the y axis. Therefore, we can describe the objects as rectangles in Figure 4.
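For readers who prefer code to prose, the following is a compact end-to-end sketch of the procedure just described: random initialization, per-cluster and per-feature density estimation, reallocation by minimal summed distance, repeated until the partition is stable. It reuses the illustrative assumptions of the earlier sketches (kernels averaged over each interval, one minus the integrated mass as the distance), so it shows the workflow of Algorithm 1 rather than the authors' Matlab implementation.

import numpy as np

SQRT2PI = np.sqrt(2.0 * np.pi)

def cluster_density(intervals, h, n_grid=20):
    # Density estimate of one cluster for one feature (same illustrative
    # assumption as before: kernels averaged over points inside each interval).
    intervals = np.asarray(intervals, dtype=float)
    if intervals.shape[0] == 0:
        return lambda x: 0.0          # empty cluster: no density mass anywhere
    def f_hat(x):
        vals = [np.mean(np.exp(-0.5 * ((x - np.linspace(a, b, n_grid)) / h) ** 2)
                        / (h * SQRT2PI)) for a, b in intervals]
        return float(np.mean(vals))
    return f_hat

def distance(f_hat, a, b, n_grid=100):
    # Assumed distance: one minus the density mass integrated over [a, b].
    xs = np.linspace(a, b, n_grid)
    return 1.0 - min(np.trapz([f_hat(x) for x in xs], xs), 1.0)

def adaptive_clustering(X, K, h=0.1, max_iter=50, seed=0):
    # X has shape (n, p, 2): n objects, p interval features, [lower, upper] bounds.
    rng = np.random.default_rng(seed)
    n, p, _ = X.shape
    labels = rng.integers(K, size=n)                  # Phase 1: random partition
    for _ in range(max_iter):
        # Phase 2: one density estimate per cluster and per feature.
        dens = [[cluster_density(X[labels == k, j], h) for j in range(p)]
                for k in range(K)]
        # Phase 3: reallocate every object to its closest cluster.
        new_labels = labels.copy()
        for i in range(n):
            costs = [sum(distance(dens[k][j], *X[i, j]) for j in range(p))
                     for k in range(K)]
            new_labels[i] = int(np.argmin(costs))
        if np.array_equal(new_labels, labels):        # Phase 4: stop when stable
            break
        labels = new_labels
    return labels

# Toy usage: three 1-feature interval objects split into two clusters.
X = np.array([[[0.0, 1.0]], [[0.2, 1.1]], [[5.0, 6.0]]])
print(adaptive_clustering(X, K=2))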

    3 Experimental results

In order to evaluate the performance of the proposed method comprehensively and fairly, we take ten interval data sets into account in our comparison experiments. In this section, we first introduce the data sets used in the experiments and the evaluation of the clustering results. Then we discuss the value of h. In the last part of this section, we show the efficiency of our adaptive algorithm through comparative experiments.

    3.1 Introduction of experimental data sets and evaluation criteria of experimental results


    Fig.3 Density functions for each cluster

Algorithm 1 Adaptive clustering of interval data

Input: D ∥ the interval database; K ∥ the number of desired clusters
Output: clusters of D

/* Phase 1 - Initialization */
for each xi ∈ D do
    assign xi into one of C1, …, CK randomly;
end for

/* Phase 2 - Representation */
for j = 1 to m do
    for k = 1 to K do
        compute the density function f̂(x)jk of Ck for feature j
    end for
end for

/* Phase 3 - Allocation */
test ← 0
for i = 1 to n do
    define the winning cluster Ck* such that k* = argmin_{k=1,…,K} ∑_{j=1}^{m} dk(Ck, xi)
    if xi ∈ Ck and k* ≠ k then
        test ← 1
        Ck* ← Ck* ∪ {xi}
        Ck ← Ck − {xi}
    end if
end for

/* Phase 4 - Stopping criterion */
if test = 0 then
    stop
else
    go to Phase 2 - Representation
end if

The data sets we used are the Wine data set (Wine), the User Knowledge Modeling data set (Knowledge Modeling), the Seeds data set (Seeds), the Electrical Grid Stability Simulated data set (Electrical Stability), the Image Segmentation data set (Image Segmentation), the Facebook Live Sellers in Thailand data set (Live Sellers), the Glass Identification data set (Glass), the Website Phishing data set (Website Phishing), the Somerville Happiness Survey data set (Happiness Survey) and the Ionosphere data set (Ionosphere), all of which are obtained from the UCI Machine Learning Repository. The details of these 10 data sets are shown in Table 1. They are all real data sets with numerical attributes. In order to perform the experiments, we use a preprocessing step to turn single-valued data into interval-valued data. For each object xi, we choose random numbers r1 and r2 ranging from 0 to 1, and we turn the single value into an interval value according to xi = [xi − r1, xi + r2]. At the beginning of the experiments, we randomly divide all the elements into K clusters, where K is known from the real labels.
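The interval-building preprocessing step is straightforward; a minimal sketch with r1 and r2 drawn uniformly from [0, 1] for every value, as described above (the function name is illustrative):

import numpy as np

def to_intervals(data, seed=0):
    # Turn a single-valued data matrix (n objects x p features) into interval
    # data xij = [xij - r1, xij + r2] with r1, r2 drawn uniformly from [0, 1].
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    r1 = rng.random(data.shape)
    r2 = rng.random(data.shape)
    return np.stack([data - r1, data + r2], axis=-1)   # shape (n, p, 2)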

Fig.4 A scatter diagram of the Iris data set based on the two characteristics

    Table 1 Experiment data sets

Since we have the real labels, the evaluation of the clustering results is based on the accuracy (AC) and the Normalized Mutual Information (NMI). The AC evaluates the consistency between the a priori partition and the partition produced by the clustering algorithm. AC is computed as follows:

AC = (1/n) ∑_{i=1}^{n} δ(xi),  (9)

In this case, n is the total number of samples and δ(xi) is a sign function: δ(xi) is 1 when xi is divided correctly and 0 otherwise. Apparently, AC ranges from 0 to 1, and a bigger AC means higher accuracy.

    The Normalized Mutual Information is defined as follows:

    (10)

Note that C and C′ are two partitions and MI(C, C′) is the Mutual Information, which measures the relevance between two sets. H(C) and H(C′) are the information entropies of C and C′, respectively. The information entropy is defined as follows:

H(C) = −∑ p(x) log2(p(x)).  (11)

    The Mutual Information is defined as follows:

MI(C, C′) = ∑_{ci∈C} ∑_{c′j∈C′} p(ci, c′j) log2(p(ci, c′j)/(p(ci) p(c′j))).  (12)

NMI measures the average mutual information between clusters and real labels. Note that NMI(C, C′) ranges from 0 to 1. If the two partitions are consistent, the division is completely correct and NMI = 1; on the contrary, NMI = 0.
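To make both criteria concrete, the sketch below computes AC and NMI for a predicted partition against the real labels. The paper does not state how predicted clusters are matched to classes when deciding whether xi is "divided correctly"; the Hungarian assignment used here is a common choice and is an assumption, and NMI is taken from scikit-learn rather than computed by hand.

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def accuracy(y_true, y_pred):
    # AC: fraction of objects "divided correctly" after matching each predicted
    # cluster to a real class (Hungarian assignment, an assumption).
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes = np.unique(y_true)
    clusters = np.unique(y_pred)
    # Contingency table: overlap between every cluster and every class.
    cont = np.array([[np.sum((y_pred == c) & (y_true == t)) for t in classes]
                     for c in clusters])
    rows, cols = linear_sum_assignment(-cont)        # maximize total overlap
    return cont[rows, cols].sum() / y_true.size

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]
print(accuracy(y_true, y_pred))                      # 1.0
print(normalized_mutual_info_score(y_true, y_pred))  # 1.0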

    3.2 The discussion about h

In this part, we discuss the influence of the parameter h in the improved clustering algorithm using kernel density estimation, and give the suggested value of h.

The h in Formula (6) can affect the estimation of the kernel density function of each cluster and further affect the clustering results. According to the requirement of kernel density estimation, the value of h should be small, tending to 0. Therefore, we discuss the results of h in the range [0.1, 0.5] and select the suggested value. Tables 2 and 3 show the clustering results when h is 0.1, 0.2, 0.3, 0.4 and 0.5, respectively.

It can be seen from Table 2 that the ACs of most data sets are less affected by the change of h, except for the fourth and fifth data sets, where the ACs are better when h=0.1 and h=0.2. From Table 3, we can see that the NMIs of most data sets are also less affected by the change of h, except for the fourth and fifth data sets, where the NMIs are better when h=0.1 and h=0.2. Considering the overall results, the result of h=0.1 is more stable, so we suggest h=0.1. In the comparative experiments, we take h as 0.1.

    Table 2 ACs under different values of h

    Table 3 NMIs under different values of h

    3.3 Comparative experiments

We show the effect of the improved method by comparing it with four other methods on ten data sets. The details of the other methods are as follows. The first two methods for comparison are partition methods based on the City-block distance and the Chebyshev distance, respectively, which are among the most influential distances between intervals. The other two methods are taken from reference [21], in which the Euclidean Hausdorff (EH) and the Span Normalized Euclidean Hausdorff (SNEH) distances are used.

    The formula of City-block distance is as follows[22]:

D(xi, xj) = |ai − aj| + |bi − bj|,  (13)

where xi is [ai, bi] and xj is [aj, bj].

    The Chebyshev distance is defined as follows[23]:

D(xi, xj) = max(|ai − aj|, |bi − bj|),  (14)

where xi is [ai, bi] and xj is [aj, bj].
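Both baseline distances are easy to compute for a single interval-valued feature; a minimal sketch:

def city_block(x, y):
    # City-block distance between intervals x = [ai, bi] and y = [aj, bj], Eq. (13).
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

def chebyshev(x, y):
    # Chebyshev distance between two intervals, Eq. (14).
    return max(abs(x[0] - y[0]), abs(x[1] - y[1]))

print(city_block([1.0, 2.0], [1.5, 3.0]))   # 1.5
print(chebyshev([1.0, 2.0], [1.5, 3.0]))    # 1.0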

Because of the influence of the initial classification, the first three methods are run 10 times to obtain the average ACs and NMIs. Table 4 and Table 5 give the average ACs and NMIs of the proposed clustering algorithm, the clustering algorithm with the City-block distance, the clustering algorithm with the Chebyshev distance, and the hierarchical clustering methods based on the EH distance and the SNEH distance.

    The hardware condition and software environment are listed as follows:

    · The hardware environment: Intel(R) CPU N3450 @ 1.10 GHz 4.00 GB Memory.

    · The software environment: Matlab 9.4.

The first data set is Wine. From Table 4 and Table 5, we can see that the adaptive clustering algorithm is better than the other clustering algorithms. From the view of AC, the adaptive clustering is better than clustering using the City-block distance by about 15 percentage points, and better than clustering using the Chebyshev distance by over 15 percentage points. Meanwhile, the adaptive clustering is better than hierarchical clustering based on the Euclidean Hausdorff distance by over 46 percentage points and better than hierarchical clustering based on the Span Normalized Euclidean Hausdorff distance by over 46 percentage points. In the view of NMI, the adaptive clustering is the best: over 25 percentage points better than clustering via the City-block and Chebyshev distances, about 60 percentage points better than hierarchical clustering using the EH distance and over 60 percentage points better than hierarchical clustering via the SNEH distance. The adaptive clustering runs efficiently on the Wine data set.

    Table 4 Results of AC

    Table 5 Results of NMI

The second data set is the Seeds data set. From Table 4 and Table 5, we can easily find that the adaptive clustering algorithm is much better than the clustering algorithms based on the City-block and Chebyshev distances and the hierarchical clustering via the EH and the SNEH. From the evaluation criterion of AC, the new method is better than the clustering algorithm concerning the City-block distance by about 3 percentage points, better than the clustering algorithm concerning the Chebyshev distance by over 7 percentage points, better than hierarchical clustering via the EH by about 49 percentage points and better than hierarchical clustering via the SNEH by over 50 percentage points. From the view of NMI, the proposed method runs much better than the other four methods.

The third data set is the User Knowledge Modeling data set. From Table 4 and Table 5, the adaptive clustering is better than the others. From the viewpoint of AC, our method outperforms the two comparative methods slightly. From the viewpoint of NMI, the adaptive clustering is better than the clustering algorithm concerning the City-block distance by about 6 percentage points, better than the clustering concerning the Chebyshev distance by about 7 percentage points, better than hierarchical clustering via the EH by about 14 percentage points and better than hierarchical clustering via the SNEH by over 14 percentage points.

The fourth data set is the Electrical Grid Stability Simulated data set. From Table 4 and Table 5, the adaptive clustering is better than the others. From the perspective of AC, our method outperforms the two comparative methods. The average value of ten experiments of the adaptive clustering is 71.599 0, which is better than the clustering algorithms concerning the City-block and Chebyshev distances by over 14 percentage points. From the viewpoint of NMI, the adaptive clustering is better than the clustering algorithm concerning the City-block distance by about 20 percentage points, better than the clustering concerning the Chebyshev distance by about 20 percentage points, and better than the hierarchical clustering methods using the EH and SNEH by about 22 percentage points.

The fifth data set is the Image Segmentation data set. From Table 4 and Table 5, the adaptive clustering is much better than the others. From the perspective of AC, our method outperforms the four comparison methods. The average value of ten experiments of the adaptive clustering is 59.393 9, which is better than the clustering algorithms concerning the City-block and Chebyshev distances by over 33 percentage points and better than the hierarchical clustering methods using the EH and SNEH by over 45 percentage points. From the point of NMI, the adaptive clustering achieves 53.604 8, which is better than the clustering algorithms concerning the City-block and Chebyshev distances by about 33 percentage points and better than the hierarchical clustering methods using the EH and SNEH by about 53 percentage points. We can clearly see that the adaptive clustering performs much better. It is more reasonable to use the probability distribution of interval values than to only consider the endpoints of intervals.

The sixth data set is the Facebook Live Sellers in Thailand data set. From Table 4 and Table 5, the adaptive clustering is better than the others. From the perspective of AC, our method is better than the two comparison methods concerning the City-block and Chebyshev distances by over 12 percentage points and better than the hierarchical clustering methods using the EH and SNEH by about 4 percentage points. From the point of NMI, the adaptive clustering is better than the hierarchical clustering methods using the EH and SNEH distances by over 4 percentage points.

The seventh data set is the Glass Identification data set. From Table 4 and Table 5, the adaptive clustering is much better than the others. From the perspective of AC, our method outperforms the four comparison methods. The average value of ten experiments of the adaptive clustering is 47.149 5, which is better than the clustering algorithms concerning the City-block and Chebyshev distances by over 10 percentage points and better than the hierarchical clustering methods using the EH and SNEH by over 12 percentage points. From the point of NMI, the adaptive clustering achieves 31.121 6, which is better than the clustering algorithm concerning the City-block distance by about 15 percentage points, better than the clustering algorithm concerning the Chebyshev distance by about 10 percentage points and better than the hierarchical clustering methods using the EH and SNEH by about 30 percentage points. We can clearly see that the adaptive clustering performs much better. It is more reasonable to use the probability distribution of interval values than to only consider the endpoints of intervals.

The eighth data set is the Website Phishing data set. From Table 4 and Table 5, the adaptive clustering is better than the others. From the perspective of AC, our method is better than the other four methods by about 8 percentage points. From the viewpoint of NMI, the adaptive clustering is better than the clustering algorithms concerning the City-block and Chebyshev distances by about 8 percentage points and better than the hierarchical clustering methods using the EH and SNEH distances by over 19 percentage points.

The ninth data set is the Somerville Happiness Survey data set. From Table 4 and Table 5, the adaptive clustering performs better than the others. From the viewpoint of AC, our method outperforms the four comparative methods by about 3 percentage points. From the viewpoint of NMI, the adaptive clustering is better than the clustering algorithms concerning the City-block and Chebyshev distances by about 2 percentage points, and better than hierarchical clustering via the EH and the SNEH by about 4 percentage points.

The tenth data set is the Ionosphere data set. From Table 4 and Table 5, the adaptive clustering is better than the other four methods. Our method is better than the two comparison methods concerning the City-block and Chebyshev distances by over 5 percentage points from the perspective of AC. From the viewpoint of NMI, the adaptive clustering is better than the clustering algorithms concerning the City-block and Chebyshev distances by about 1 percentage point and better than the hierarchical clustering methods using the EH and SNEH distances by over 4 percentage points.

In order to show the effect of the improved clustering method more clearly, we draw the clustering results of three data sets. Figure 5, Figure 6 and Figure 7 show the clustering results of the Wine, Seeds and Image Segmentation data sets, respectively. In each group of graphs, Subfigures (a) to (f) are the results of the real labels, clustering based on the adaptive method, clustering based on the City-block distance, clustering based on the Chebyshev distance, and hierarchical clustering using the EH distance and the SNEH distance, respectively. In each subfigure of Figure 5, we use three colors to represent the three classes. We use the value of the seventh attribute as the x axis and the value of the eighth attribute as the y axis. From Figure 5, we can clearly see that the result of our adaptive method is the most similar to the real labels. In each subfigure of Figure 6, there are three colors to represent the three classes. We use the value of the first attribute as the x axis and the value of the second attribute as the y axis. According to Figure 6, we can clearly find that the result of the adaptive clustering method is more similar to the real labels than the others. There are seven colors in each subfigure of Figure 7 to represent the seven classes in the real labels. We use the value of the eleventh attribute as the x axis and the value of the twelfth attribute as the y axis. From Figure 7, we can clearly see that the result of our adaptive method is the closest to the real labels. The adaptive method performs much better than the other methods because it divides all instances into seven classes effectively. At the same time, we can clearly find that there are only two clusters in the results of the clustering algorithms based on the City-block distance and the Chebyshev distance, and five clusters are lost.

    Fig.5 Scatter diagrams of the Wine data set based on the two selected characteristics

    Fig.6 Scatter diagrams of the Seeds data set based on the two selected characteristics

    Fig.7 Scatter diagrams of the Image Segmentation Data Set based on the two selected characteristics

By comparing the ten experiments with the other four methods and the pictures of the results, we can clearly find that the new method performs much better. Compared with the hierarchical clustering methods using the EH and SNEH distances, our method can achieve better and more stable results. Meanwhile, the adaptive clustering method outperforms the clustering algorithms based on the City-block distance and the Chebyshev distance, which only consider the endpoints of intervals. So it is reasonable to apply kernel density estimation to interval data and take the distribution on the intervals into account, avoiding the problem of the City-block, Chebyshev and Hausdorff distances, which only consider the endpoints of intervals.

    4 Conclusions

The main contribution of this paper is the application of kernel density estimation to interval data and the construction of an adaptive clustering approach for interval data. We estimate the density function of clusters and take the distribution on the intervals into account instead of only considering the endpoints of intervals. To estimate the density function, we propose a kernel density estimation approach for interval data, which is a nonparametric way of estimation. Based on the above probability density function, we define the distance between a cluster and an instance. Moreover, we present a clustering method for interval data using the defined distance. Comparative results show the effectiveness of the proposed method. One may conclude that it is not reasonable to cluster by considering only the endpoints of the intervals. In the future, we will further consider an adaptive parameter h to improve on the fixed parameter h used in this paper.
