
    Internal Validity Index for Fuzzy Clustering Based on Relative Uncertainty

2022-08-24
    Computers, Materials & Continua, 2022, Issue 8

Refik Tanju Sirmen and Burak Berk Üstündağ

1 Graduate School of Science, Engineering & Technology, Istanbul Technical University, Istanbul, 34469, Turkey

2 Faculty of Computer & Informatics, Istanbul Technical University, Istanbul, 34469, Turkey

Abstract: Unsupervised clustering and clustering validity are essential instruments of data analytics. Although clustering is realized under uncertainty, validity indices do not deliver any quantitative evaluation of the uncertainties in the suggested partitionings. Validity measures may also be biased towards the underlying clustering method. Moreover, neglecting a confidence requirement may result in over-partitioning. In the absence of an error estimate or a confidence parameter, probable clustering errors are forwarded to the later stages of the system, whereas having an uncertainty margin of the projected labeling can be very fruitful for many applications such as machine learning. Herein, the validity issue is approached through estimation of the uncertainty, and a novel low-complexity index is proposed for fuzzy clustering. It involves only uni-dimensional membership weights, regardless of the data dimension, stipulates no specific distribution, and is independent of the underlying similarity measure. Inclusive tests and comparisons showed that it can reliably estimate the optimum number of partitions under different data distributions, besides behaving more robustly against over-partitioning. Also, in the comparative correlation analysis between true clustering error rates and some known internal validity indices, the suggested index exhibited the strongest correlations. This relationship has also been proven stable through additional statistical acceptance tests. Thus the provided relative uncertainty measure can be used as a probable error estimate in clustering as well. Besides, it is the only method known that can exclusively identify data points in dubiety, and it is adjustable according to the required confidence level.

Keywords: Machine learning; data science; clustering validity; fuzzy clustering; uncertainty; intelligent systems; data analytics

    1 Introduction

Clustering is a widely used interdisciplinary technique for classifying dimensional data objects by categorizing them into subsets based on specified similarity or dissimilarity measures. Because it facilitates modeling and discovery of relevant knowledge in data [1], it is an essential part of many intelligent systems in vast areas, with a recognized prominence. Data analysis applications aim to explore multivariate real-life data of certain domains and extract hidden structures in them for various practices. Clustering is therefore one of the key methods of decision-making in machine learning as well [2].

There are diverse dichotomies in terms of clustering types, such as hard vs. soft, hierarchical vs. flat, model-based vs. cost-based, and parametric vs. non-parametric, each with its implementations, serving different clustering aims [3]. These aims are pursued through different algorithms that use relevant measures like inter-cluster separation, intra-cluster compactness, density, gap, normality, etc. Algorithms can follow different approaches such as centroid-based k-means (cf. [4] for its origins), agglomerative hierarchical single [5] and complete linkage [6], spectral [7], mixture probability models [8,9], density-based DBSCAN [10], and so on.

Among a choice of algorithms, each with its relative advantages [11], fuzzy methods generalize hard-decision techniques by using fuzzy measures and distributing the membership degrees across the clusters [12,13]. This approach helps to deal with inexact systems by making it feasible to utilize fuzzy outcomes and perform rule-based classifications. Continuous progress in the applications of this technology is evident, and a wealth of material is available on the topic, including [14-16].

Many of the ongoing efforts are on the analysis and improvement of clustering accuracy, as well as on complexity performance [17-20]. Some studies instead try to further explore the clustering theory, which could even serve to develop new approaches and algorithms. In [21], the statistical aspects of clustering are discussed and the focal questions are dealt with from the statistical perspective. Also, [22] explains how correlation clustering can be a potent addition to data mining. Besides, quality and efficiency are assessed and some useful metrics are suggested in several studies like [23-26].

Depending largely on the structure of the data, clusters obtained by a method may not necessarily be revealing for the respective application. Hence the validity of the partitioning needs to be evaluated, so that the data model and the applied methods can be refined. As seen in Fig. 1 and also suggested in [27], cluster validation “should be part of any cluster analysis as integral to developing a classification model”.

    Figure 1:The ground of clustering validation

This quantitative evaluation process involves the following main issues [27]:
    a) Checking the randomness of the data, which can be determined by measuring cluster tendency. (Some measures such as the Hopkins statistic, the stability index, or some visual methods have been proposed to test the spatial randomness of the data [28-31].)
    b) Estimating the optimal number of clusters.
    c) Determining the compliance of the results to the data without any prior information.
    d) Comparing the results against an externally provided template.
    e) Comparing two sets of clusters to specify the relatively better one.

The first three tasks require unsupervised methods that use only the intrinsic information obtained from the data, and are thus called internal methods. In contrast, supervised methods deal with measuring how well the results fit prespecified external information (typically class labels) not available in the data set. Rather than being a distinct type, relative methods are a special use of related measures, to compare results obtained from different methods or from the same method with different parameters [27,32].

    1.1 Literature Overview

Quite a few comparable internal and external methods and criteria have been proposed to quantify validity. The validity function offered in [33] helps measure the cluster overlaps. The validity of a partition has also been discussed in many texts such as [34], where an overall average compactness and separation measure was suggested for fuzzy clustering. Some works summarize, compare, and evaluate the proposed validation measures from different aspects [35,36].

Since a ground-truth reference is available in external validation, the decision of which clustering result is closer to the truth over a comparison criterion is independent of the underlying methods or data structures. Comparison criteria involve pair-counting and/or information-theoretic measures, entropy, purity, or the like, which generally relate to diversity. Some recognized ones include the Jaccard coefficient [37], the Rand index [38], the adjusted Rand index [39], the adjusted mutual information [26] (better especially when dealing with unbalanced small clusters [40]), the Hubert statistic [40], and the principal component analysis based stability score [41]. In addition, [3] and [42] provide an overview of comparison criteria and their properties, and [26] specifically discusses some information-theoretic measures.

Internal indices, on the other hand, realize their evaluations in the absence of reference information about the data. Therefore, measures need to be devised from the data statistics. Hierarchical clustering schemes are mostly validated using the specific cophenetic correlation coefficient measure [43,44]. Typical unsupervised measures for single schemes, however, are compactness, which represents how cohesive the objects in a cluster are; separation, which represents how distinct one cluster is from the others; or a combination of these [27]. A vast literature is available explaining them with variations, such as [3,27,42]. Validity is determined through statistical hypothesis testing based on these criteria. The other major objective that internal indices serve is to estimate the optimum number of clusters, by testing a range of cluster counts and determining the number that best satisfies the criteria.

Validity indices specifically designed for fuzzy clustering usually calculate measures over the fuzzy memberships. While some also require using the data, others solely or mainly involve membership weights [45,46]. In this case, the sensitivity to the fuzzifier and the cluster count needs to be accounted for [3].

Some commonly used internal indices include Calinski et al. [47], Davies et al. [48], Silhouette [49], Xie et al. [34], Dunn’s index [50], and its modified versions such as the Generalized [51], Alternative [52,53], and Modified [54] Dunn’s index. Further internal measures are discussed in [42]. Also, in [55], 10 validity indices specifically for fuzzy clustering are reviewed.

    1.2 Observations and Rationale

Validity indices competently deliver the essential inherent properties. Suggested methods measure the criteria through inter-cluster and intra-cluster variances, entropies, and the like, evaluate the (dis)similarities, scrutinize the analysis as a whole, and then approximate the overall validity. However, it should be noted that the main clustering decisions may be made on samples (the size issue), or on distorted sets (the measuring-error issue). It was emphasized in [3], p. 596, that internal cluster validation “is more difficult than assessing the model fit or quality of prediction in regression or supervised classification, because in cluster analysis, normally true class information is not available”.

Unsupervised methods optimize their objective functions over the established criteria. Yet such functions may, to some extent, be measuring their conformity with the underlying clustering method as much as the quality of the clustering result [56]. Moreover, a method may likely favor clusterings realized on the same criteria as itself. A detailed discussion on some validity indices that produce misleading results for the k-means algorithm on data with skewed distributions is presented in [42]. Also, [57] shows that some analyzed indices were observed to be biased in certain conditions.

Furthermore, clustering is a classification-like process applied under uncertainty. Algorithms for this purpose are designed to partition the given data, even when there are no meaningful clusters in a random population [28,31]. Since the number of partitions can theoretically go up to the number of data points, methods based mainly on improving (dis)similarity criteria while ignoring a confidence parameter may result in over-partitioning. Besides, in the validation of fuzzy clusterings, statistics of data points assigned with higher degrees of confidence have a subordinate relevance for credence, since they comprise limited information about uncertainty, which indeed appears in weakly assigned members.

Clustering is most probably part of a larger application. The decisive need in such an ambiguous case, heavily based on statistics, is a plausible method that estimates the probability margin of the anticipated actual error. In the absence of such an estimate, the resultant clustering is accepted as is. Under such a surmise, any probable clustering error is dragged forward to the later stages of the system, regardless of the technique employed. Conversely, having such a measure of uncertainty, for a single observation or the whole clustering, would be very instrumental. Therefore, the accuracy performance of the performed clustering can be better approached by estimating the degree of uncertainty, apart from validity. Nevertheless, uncertainty and validity are not reciprocal.

From these, we can conclude that a labeling decision is, in a sense, a statistical test of the hypothesis that a sample data point comes from a particular cluster population, at some level of confidence. Including a confidence level premeditated according to the application needs therefore reduces the inherent uncertainty in evaluations. As the term validation suggests, the correct estimation of the optimal partitions is the premise of a valid clustering, and hence of fewer labeling errors. Thus, a strong monotonic relationship between a measured validity index and the actual clustering error rate indicates a sound index.

    1.3 Method and Organization

While respecting the fruitful efforts, revisiting the communal perception of the objective of validation in light of these considerations may be beneficial. As both intra-cluster similarity and inter-cluster dissimilarity increase, the mean uncertainty in labeling should consequently decrease. That is, the objective of clustering validation may be reformulated as minimizing the mean uncertainty in labeling assignments, measured against defined criteria. Maximizing overall statistical confidence mathematically complies with maximizing the weights of the determined memberships in fuzzy clustering. This approach also enables identifying data objects in a dubiety state. The prominence of such information for other fields like machine learning is apparent.

This study tries to address these issues and suggests a purposive measure that provides an estimate of the level of uncertainty in realized fuzzy-family clusterings. The ensuing section reformulates the objective function as optimization of intra-cluster discordance and inter-cluster proximity, over uncertainties. The measure can be derived mainly by perusing the weakly classified members’ appointed probability weights with a designated margin of confidence, as introduced below.

The method has been tested on numerous samples in diverse distributions, and the results were compared with some well-known internal validity indices. An exclusive comparative analysis of the correlations between the indices and the measured actual clustering errors is also presented here. In addition, acceptance tests, which are generally neglected, were applied to the obtained results as well.

In the remainder of this paper, Section 2 discusses the topic from the stated perspective, explains the envisaged methods, and introduces the proposed measures; Section 3 describes the details of the tests applied, including the acceptance tests; and Section 4 presents the obtained results. This is followed in Section 5 by the assessments and conclusions with respect to the canonical index properties set out.

    2 Materials and Method

Relying on the discussions so far, we can view the decision to label a sample data point into a particular cluster as a statistical test of the hypothesis that it comes from that cluster population at a certain level of confidence. Including the confidence level in the evaluations will help manage the inherent uncertainty in decisions. Established on this approach, the purpose of a validity index can be reformulated as minimizing the mean uncertainty of the overall clustering process, measured on defined criteria.

The objective function of the proposed novel internal clustering validity index reaches its minimum at the point where the compactness and separation of clusters are optimal, accorded by the given level of confidence. Below, first the necessary definitions are provided within this framework of the relevant mathematical rationale. Then, the method is explained in detail, and its main advantages over other indices are broadly highlighted.

    2.1 Relative Uncertainty and Clustering

Clustering is a decision-making process under uncertainty, relying on quantitative criteria. Under uncertainty, every observation is subject to distortion by measurement errors and/or various interfering effects. The distance of an observation m to the sample mean m̄, i.e., Δm = |m̄ - m|, is the absolute uncertainty. How large the absolute uncertainty is compared to the mean defines the relative uncertainty [58]:

    δ_m = Δm / m̄    (1)

As expression (1) suggests, δ_m decreases with Δm. To explore further, the commonly used k-means algorithm, as an example, tries to converge to local minima (through the Voronoi iteration) to reduce the sum of squared distances between the data points and the centroid [59]. The fuzzy c-means ‘FCM’ algorithm, in turn, introduces the additional fuzzification exponent parameter that decides the level of cluster fuzziness [12,13]. By minimizing the objective function, the data points are associated with every cluster with m_c weights (respecting the rule Σm_c = 1, 0 ≤ m_c ≤ 1, 1 ≤ c ≤ k, k: number of clusters), where the m values indicate the membership probabilities. Ultimately the points are assigned to the clusters of the highest probabilities, i.e., m_ia = max(m_ic) is the assignment membership weight. The relevance of the assigned m_ia values depends on the decided number of clusters k and the distances to the cluster centroids determined by the associated data points. Accordingly, as the accuracy of the m_ia values improves, the mean Δm, and correspondingly δ_m, decreases, and so does the mean relative uncertainty of the system.
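As an illustration, the following minimal NumPy sketch computes FCM membership weights from given centroids (the standard FCM update, formalized later in Eq. (5)) and extracts the assignment weights m_ia; the function and variable names are ours, not the paper’s.

```python
import numpy as np

def fcm_memberships(X, centroids, e=2.0):
    """Standard FCM membership update: the weight of point i in cluster c
    shrinks as its distance to centroid c grows relative to the others."""
    # Distances of every point to every centroid, shape (n_points, k)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    d = np.fmax(d, np.finfo(float).eps)   # guard against zero distances
    # m_ic = 1 / sum_c' (d_ic / d_ic')^(2/(e-1)); rows then sum to 1
    ratios = (d[:, :, None] / d[:, None, :]) ** (2.0 / (e - 1.0))
    return 1.0 / ratios.sum(axis=2)

# m = fcm_memberships(X, centroids)
# labels = m.argmax(axis=1)   # cluster of the highest probability
# m_ia = m.max(axis=1)        # assignment membership weights
```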

The above remarks lead us to the hypothesis that when the data set is divided into the optimal number of clusters K and the points are labeled to establish this optimality, the overall relative uncertainty of the system is expected to be at a minimum. Therefore, the validity of clusterings can be estimated by measuring and comparing the total relative uncertainties. Furthermore, from the uncertainty perspective, the magnitude of a fuzzy assignment weight is statistically irrelevant, as long as it satisfies the desired confidence level. Hence the question of validity can be reformulated as whether the hypothesis that the data points truly come from the associated cluster populations holds at a certain level of confidence. As the number of assumed partitions k can theoretically go up to the number of data points n, it can be suspected that basing the validity method solely on improving similarity and dissimilarity criteria while ignoring the confidence and uncertainty parameters may lead to over-partitioning, defined as k > K.

This requires designating an interval that specifies the upper and lower margins of error for a given confidence level. Moreover, for our topic, a weight beyond the upper bound is an indication that no cluster other than the one yielding the highest probability is close enough to be graded (i.e., a strong membership). Therefore, we only need to consider the lower bound when determining the uncertainties. Any association below this bound can be assumed weak and thus uncertain.

    2.2 Uncertainty Boundary and Weak Members

The m probabilities essentially depend on the intra-cluster and inter-cluster variances (i.e., the compactness and separation). Due to numerous detrimental effects, data points may shift too close to a neighbor cluster’s boundary, which could cause labeling errors. Two such points are marked in Fig. 2. Here a 3D example reference space is assumed, which can easily be abstracted to N dimensions. Different colors represent different clusters. The d_a and d_n are the vectorial distances, on the similarity measure, of a point to the centroids c_ia and c_in of the assigned and the next-most-probable clusters, respectively. That means, for data point i, d_ia = ||i - c_ia|| and d_in = ||i - c_in||.

    Figure 2:Peripheral weak members

The members on the periphery of a cluster are the ones under the most risk of incorrect labeling if another cluster lies too close. Ipso facto, such weak members are assigned lower m weights. Deciding an uncertainty boundary (denoted by δ_b) gives us the capability of classifying the members assigned with lower weights as weak members. Let {w} represent the weak members set. Then a data point i with an assignment (i.e., the maximum) membership weight m_ia less than δ_b is identified as a weak member. Expression (2) provides the formal definition of a weak member:

    {w} = {i : m_ia < δ_b}    (2)

The compactness of every cluster can vary due to different distribution effects, so a roughly derived overall δ_b would be misleading. Thus, uncertainty boundaries should be computed exclusively on a per-cluster basis for the desired confidence level (which is 1 - α), and the weak members identified accordingly. Eq. (3) gives the general formula of the uncertainty boundary, where μ_m and σ_m represent the mean and standard deviation of the m weights, z_α is the reference z-score for the designated confidence (1 - α), and n is the number of members.
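The exact form of Eq. (3) is not reproduced in this excerpt; the sketch below assumes a plausible reading consistent with the quantities named in the text (a lower confidence bound μ_m - z_α·σ_m/√n_c on the cluster’s mean weight), so treat the boundary formula as our assumption rather than the paper’s definition.

```python
import numpy as np

def weak_members(m_ia, labels, z_alpha=1.645):
    """Flag weak members per cluster (Eq. (2): m_ia < delta_b).
    Boundary assumed as delta_b = mu_m - z_alpha * sigma_m / sqrt(n_c),
    computed separately for each cluster, as the text prescribes."""
    weak = np.zeros(len(m_ia), dtype=bool)
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        w = m_ia[idx]
        delta_b = w.mean() - z_alpha * w.std(ddof=1) / np.sqrt(len(w))
        weak[idx] = w < delta_b
    return weak
```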

    2.3 Dubiety Boundary and Members in Dubiety

When all data points of a cluster are right at their centroid, μ(d_a) and σ(d_a) should return zero. For all cases other than this theoretical one, μ(d_a) > 0. On the other hand, if d_ia/d_in = 1 then the point i is at the limbus, since it is at an equal distance to both centroids, as expression (4) depicts. The limbus state manifests the highest level of uncertainty.

In the ‘FCM’ algorithm [12,13], the membership weight m_ia of data point i in the cluster a is defined as in Eq. (5) (k: number of clusters; c_a, c_c: centroids; e: fuzzification exponent):

    m_ia = 1 / Σ_{c=1..k} (||i - c_a|| / ||i - c_c||)^(2/(e-1))    (5)

In this equation, ||i - c_a|| and ||i - c_c|| are essentially consistent with the above-mentioned distances. Then, d_ia/d_ic gives the proportion of the distances to the different cluster centroids. As this proportion gets closer to 1, the anticipated confidence in the classification decision for point i decreases. Correspondingly, let m_ia and m_in be the maximum and the next-highest fuzzy membership weights of i, respectively. So,

    γ_i = m_ia / m_in    (6)

A proximity value γ close to 1 in Eq. (6) indicates that it is not clear which cluster the data point belongs to (which corresponds to the case of a Silhouette index close to zero). Analogous to Eq. (2), letting a set represent the members in the dubiety state, we can designate those members as in Eq. (7):

This means that cluster members with γ_i values less than or equal to the dubiety boundary constitute the set of members in the dubiety state, adjacent to other clusters. The following definition formally provides a generic boundary to identify data points in the dubiety state, with the foregoing limbus at the lowest end.

Based on Chebyshev’s inequality, the probability of an observation lying within r standard deviations of the mean is at least (1 - 1/r²) [58]; for instance, r = 2 guarantees that at least 75% of the observations fall within two standard deviations of the mean, whatever the distribution. Eq. (9) justifies the dubiety boundary definition in Eq. (8).

Hence, proximity values lower than or equal to the b boundary fall into the dubiety range. Formulating the dubiety boundary based on Chebyshev’s inequality makes it valid not only for Gaussian but for a wide class of arbitrary probability distributions [60].
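Eq. (8) itself is not reproduced here; the sketch below computes the proximity values of Eq. (6) and flags dubiety members with an assumed Chebyshev-style cutoff r standard deviations below the mean proximity, floored at the limbus value 1. The cutoff form is our assumption, not the paper’s exact boundary.

```python
import numpy as np

def dubiety_members(m, r=2.0):
    """gamma_i = m_ia / m_in (Eq. (6)); values near 1 mean the point sits
    near the limbus. Cutoff assumed as b = max(1, mu_gamma - r*sigma_gamma):
    by Chebyshev, at most 1/r^2 of points fall that far below the mean."""
    top2 = np.sort(m, axis=1)[:, -2:]   # two largest weights per point
    gamma = top2[:, 1] / top2[:, 0]     # m_ia / m_in >= 1
    b = max(1.0, gamma.mean() - r * gamma.std(ddof=1))
    return gamma, gamma <= b            # dubiety set, cf. Eq. (7)
```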

    2.4 The Relative Uncertainty Index of Clustering

The optimal number of partitions (K) of a given dataset, at the desired level of confidence, is expected to be achieved where the overall uncertainty rate is minimum. Theoretically, k can go up to n, as it is presumed that the data set is suitable for partitioning (it may be necessary to decide on cluster tendency first; cf. the related references in the previous section). So, the relative uncertainty index (referred to as ‘rU’ hereafter) for k partitions can be formulated as in Eqs. (10) and (11):

Here, π_c and γ_c respectively denote the discordance (as the opposite of coherence) and the proximity (as the opposite of separation) of cluster c. Thus the objective function is to minimize the rU_k value, as in Eq. (11). The cluster discordance can be computed through Eqs. (12) and (13), given below.

δ_ba and n_c are the uncertainty boundary and the number of members of cluster c. Also, t_α is the Student’s t-value corresponding to α, and s_ma is the standard deviation of the m_a values (i.e., the normalized m_ia weights; cf. Eq. (19)) for the cluster members. Since Δm_ia/m̄_a is the intrinsic relative uncertainty in point i’s membership weight (cf. Eq. (1)), (δ_i × n_c) in Eq. (12) gives the total relative uncertainty of c, which represents the intra-cluster discordance.

On the other hand, referring to Eq. (6), the cluster proximity based on uncertainties can be computed as:

Like δ_ba in Eq. (13), δ_bn in Eq. (18) is the adjacency boundary for the m_n values (i.e., the normalized m_in weights; cf. Eq. (19)), and s_mn is the standard deviation of the m_n values. So in Eqs. (16) and (17), m_a < δ_ba for data point i indicates that i is in the weak members set {w_c}, and similarly, if m_n > δ_bn, it is in the adjacency set {a_c}. Their union constitutes the proximity set {γ_c} of cluster c defined in Eq. (15). The mean of the γ_i values over the {γ_c} members thus corresponds to the average separation from the nearest clusters, i.e., Eq. (14) (n_p is the set members count).

An important issue to note is that all of the raw m_ia and m_in membership weights must first be normalized by their standard deviations, as described in Eq. (19) below (where m_ix denotes m_ia or m_in), to make the cluster-wise calculations invariant to changes in k. Such normalization makes all membership values more robust to their monotonic dependence on k and to the sensitivity to the fuzzification exponent, and thus serves to prevent under-partitioning by keeping the same order of magnitudes. (A section is devoted to the importance and effects of measure normalization in [42].)

In epitome, the unitless relative uncertainty index rU_k provides a blind estimate of the average relative uncertainty of the fuzzy clustering operation for k partitions. The optimum number of partitions K is the k where rU_k reaches its minimum, minimizing the π_c while maximizing the γ_c.

It should be noted that the ability to set the desired confidence level (1 - α) makes the index adjustable to the respective application. Moreover, besides providing an estimation of the optimum number of partitions, the presented ‘rU’ index can specifically identify the members in dubiety and the weak members of each cluster (via Eq. (7) and Eq. (16)).
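Since Eqs. (10)-(19) are not reproduced in this excerpt, the harness below only sketches the selection logic around them: normalize the weights by their standard deviation in the spirit of Eq. (19), evaluate a relative-uncertainty score for each candidate k, average over random FCM restarts, and pick the argmin. Here fit_fcm and relative_uncertainty are hypothetical callables standing in for the FCM run and the full rU computation.

```python
import numpy as np

def normalize_weights(m_x):
    """Eq. (19)-style scaling: divide raw weights by their standard
    deviation so cluster-wise statistics stay comparable across k."""
    return m_x / m_x.std(ddof=1)

def estimate_K(X, k_range, fit_fcm, relative_uncertainty, n_restarts=25):
    """K = argmin_k rU_k (cf. Eq. (11)), averaging each rU_k over random
    restarts to damp the effect of FCM's random initiation."""
    scores = {}
    for k in k_range:
        runs = [relative_uncertainty(fit_fcm(X, k)) for _ in range(n_restarts)]
        scores[k] = np.mean(runs)
    return min(scores, key=scores.get), scores
```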

    2.5 Index Properties

    Some desirable properties of an internal validity index can be deemed as follows:

• Reliability: A reliable index is expected to yield consistent results on data with as many different underlying frequency distributions as possible, without assuming Gaussian or any other.

    • Validity: Under- and over-partitioning would result in higher clustering errors. As the title suggests, the correct estimation of the optimal k is the premise of valid clustering, and hence of fewer labeling errors. The calculated index is therefore expected to correlate with the measured true clustering errors.

    • Robustness to over-partitioning: Since unsupervised methods attempt to partition data even if it does not contain any clusters, the need arises to apply cluster tendency tests beforehand [28,31]. Thus, over- (and under-) partitioning is a threat the index should avoid.

    • Independence of measure: With no stipulation on the measure (Euclidean, Mahalanobis, or such), it is supposed to run independently of the underlying similarity measure.

    • Informativeness: Calculation outcomes can be elucidative on the intrinsic statistics of the data, to serve further scopes.

    • Optimality: Deriving adequate internal statistics from a sample set, rather than the entire population data (which may not always be possible), is desirable. This practice means lower time and memory complexities.

    • Efficiency: Another creditable property is the efficiency of the index, mainly in terms of time complexity, which may become quite significant, especially in real-time applications.

Numerous diverse tests were conducted, aimed mainly at these canonical properties we suggest, and the results were also compared with some of the well-known internal validity indices, as shared below. Another point worth mentioning is that acceptance tests, which are mostly neglected, were applied as well to the results obtained under different conditions. The tests are explained in the ensuing sections.

    3 Tests and Comparisons

    3.1 Test Data

The tests were performed on various numeric structural data, both artificially generated (the datasets generated during the current study are available from the corresponding author upon request) and retrieved from online repositories (e.g., http://cs.joensuu.fi/sipu/datasets, https://www.kaggle.com/datasets, https://data.world/datasets/clustering, http://archive.ics.uci.edu/ml, with files such as S2, S3 [61], A1, A2 [62], Mall_Customers.csv, etc.). Specifics of the test data were 2 ≤ D ≤ 13 (one with D = 912), 50 ≤ n ≤ 5250, and 3 ≤ K ≤ 35 (where D, n, and K respectively represent the dimension, size, and number of designed partitions), in Gaussian, Poisson, Weibull, and nonspecific distributions. Having test data in different distributions was particularly emphasized.

    3.2 Clustering

Clustering was carried out using the ‘FCM’ algorithm with random initiation. Non-partial clustering was assumed; thus, no group was left empty once all of the points were eventually assigned. Also, the data was assumed non-overlapping, so the data points were associated with the cluster of the highest membership probability. The fuzzification exponent parameter was 2.0, the minimum improvement per iteration was 10⁻⁶, and the maximum number of iterations was 200. Parameters were selected at common standards for all subjects, as no significant effects were observed on the validation tests.
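For reproduction purposes, the stated setup maps directly onto, e.g., scikit-fuzzy’s cmeans routine (one common FCM implementation; the paper does not name the implementation it used, so take this as an illustrative sketch):

```python
import skfuzzy as fuzz

def run_fcm(X, k, seed=None):
    """X: array of shape (n_points, n_features); skfuzzy expects the
    transposed (features x samples) layout."""
    cntr, u, _, _, _, _, _ = fuzz.cluster.cmeans(
        X.T, c=k,
        m=2.0,        # fuzzification exponent used in the tests
        error=1e-6,   # minimum improvement per iteration
        maxiter=200,  # maximum number of iterations
        seed=seed)    # random initiation
    labels = u.argmax(axis=0)   # non-overlapping: highest-probability cluster
    return cntr, u, labels
```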

    3.3 K-estimation Tests

For all cases, validity metrics were computed for the ‘rU’ index, as well as for 8 additional well-known internal indices, namely Calinski-Harabasz, Davies-Bouldin, Silhouette, Dunn, Alternative, Generalized, and Modified Dunn, besides Xie-Beni. (The abbreviations CH, DB, S, D, AD, GD, MD, and XB are used respectively for these indices throughout this text.) Every set was tested at least 35 times for each K, incrementing n in steps of 25. Besides, the whole clustering and index computation process was repeated at least 25 times before determining the optimal k on average, to avoid the possible effect of the random initiation in the ‘FCM’.

    3.4 Correlation Analysis

For the correlation analysis, additional artificial noisy data sets were generated in Gaussian, Weibull, and Poisson probability density distributions (each with K = 8, n > 600), the Gaussians representing the middle ground and the other two representing two possible extremes. Various tests were conducted on at least 600 different sets for every noise environment.

Correlation coefficients were computed to scrutinize the strength of the relationship between the relative behaviors of the measured accuracy performances and the validity indices, including rU. The specified confidence level for correlation was 95% during the tests, and the maximum acceptable performance error was FPr < 0.05 (please see the related section below).

Pearson’s product-moment correlation coefficients were assumed principal, yet Spearman’s rank correlation coefficients were evaluated as well. Computed coefficient magnitudes were interpreted as: .0-.19: very weak; .20-.39: weak; .40-.59: moderate; .60-.79: strong; .80-1.0: very strong [63].
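Both coefficients are standard in scipy; a minimal helper reproducing this reporting scheme (the verbal labels follow the scale quoted above) might look like:

```python
from scipy import stats

def correlation_report(index_values, error_rates):
    """Pearson (principal) and Spearman coefficients between a validity
    index and the measured true error rates, with the quoted verbal scale."""
    pearson_r, _ = stats.pearsonr(index_values, error_rates)
    spearman_r, _ = stats.spearmanr(index_values, error_rates)
    scale = [(0.20, "very weak"), (0.40, "weak"), (0.60, "moderate"),
             (0.80, "strong"), (1.01, "very strong")]
    label = next(name for bound, name in scale if abs(pearson_r) < bound)
    return pearson_r, spearman_r, label
```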

    3.5 Performance Criteria

    Clustering accuracy performance was measured by comparing the estimated and provided ground-truth labels using two external indices depicted below:

• FPr: The false-positive clustering error rate ‘FPr’ identifies the rate of false-positive (FP) mappings obtained from the comparisons, which can be stated as in Eq. (20), where c_io is the original ground-truth label, c_ia is the assigned label, and n is the number of data points in the population:

    FPr = (1/n) Σ_{i=1..n} 1(c_ia ≠ c_io)    (20)

Since complete (i.e., non-partial) clustering was assumed, all of the points got assigned and no group was left empty. Since we compare against the ground-truth partitions, a non-overlapping point cannot belong to more than one group simultaneously and is eventually assigned to a particular one, because of the complete clustering assumption. This means that an FP assignment occurring in some cluster must be the concurrent counterpart of a false-negative (FN) assignment in another cluster; under these conditions, ΣFP = ΣFN.

• aRI: To cover the overlaps, the Adjusted Rand index [40] was also computed, which is a version of the Rand index adjusted for chance, as described in [38]. (A computation sketch for both indices follows.)
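A sketch of both criteria, with the caveat that Eq. (20) presupposes cluster ids already mapped onto the ground-truth ids; since raw cluster ids are arbitrary, the helper below aligns them with a Hungarian assignment first. That alignment step is our assumption, and equal cluster counts are assumed for brevity.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score

def fp_rate(true_labels, pred_labels):
    """FPr per Eq. (20): fraction of points whose aligned assigned label
    differs from the ground-truth label."""
    t, p = np.unique(true_labels), np.unique(pred_labels)
    # Negated contingency counts: maximizing overlap = minimizing cost
    cost = np.array([[-np.sum((pred_labels == pi) & (true_labels == tj))
                      for tj in t] for pi in p])
    rows, cols = linear_sum_assignment(cost)
    mapping = {p[r]: t[c] for r, c in zip(rows, cols)}
    aligned = np.array([mapping.get(l, -1) for l in pred_labels])
    return np.mean(aligned != true_labels)

# aRI, the Rand index adjusted for chance:
# ari = adjusted_rand_score(true_labels, pred_labels)
```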

    3.6 Acceptance Tests

Even if significant correlation coefficients satisfying the confidence level were observed in all studied environments, the significance of the differences between the correlation coefficients obtained in different environments should also be tested before the correlation can eventually be accepted (we encourage the reader to confer [64]). Hence, hypotheses were established to test the significance of the differences, as H0: ρ1 - ρ2 = 0 and H1: ρ1 - ρ2 > 0 (ρ being the correlation coefficient).

Since the sample populations were gathered from three principal environments (namely Gaussian, Poisson, and Weibull), the tests were conducted for every pair, say ρg-ρp, ρp-ρw, and ρg-ρw. Afterward, if all three of the pairwise tests returned that H0 was not rejected at the desired level of confidence, then the consistency of the correlation decision was accepted for that variable; otherwise, it was rejected.

For the null-hypothesis tests, the criterion of significance was assumed as 0.05 (i.e., z_α = 1.96). The tests were applied separately to both the attained Pearson and Spearman correlation coefficients for every variable in question. Pairwise z-scores z_gp, z_pw, and z_gw were computed and compared with the reference z-score z_α. In conclusion, if z_xy < z_α, the null hypothesis could not be rejected at the specified level of confidence, and thus the differences between the predictive values were not significant. (Note that attaining all 6 separate z_xy < z_α results was mandatory for accepting the consistency of a single variable.)
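The paper does not spell out the test statistic; the usual choice for comparing two independent correlations, consistent with the pairwise z-scores described, is Fisher’s r-to-z transform, sketched below:

```python
import numpy as np

def fisher_z_diff(r1, n1, r2, n2):
    """z-score for H0: rho1 - rho2 = 0 between two independent correlation
    coefficients, via Fisher's r-to-z transform."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (z1 - z2) / se

# Consistency is accepted only if every pairwise score stays below 1.96:
# z_gp = fisher_z_diff(r_gauss, n_gauss, r_poisson, n_poisson), etc.
```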

    The collected results from the tests are summarized and discussed below.

    4 Results and Discussions

    4.1 K Estimations

The optimum-number-of-partitions (k) estimations by the proposed ‘rU’ index were quite consistent with the other indices. Fig. 3 exemplifies the divergences between the k-estimates and the true K on 148 samples of D = 2, 160 ≤ n ≤ 480, 3 ≤ K ≤ 15, in different distributions. The bars indicate the absolute mean divergence percentages of the K estimations from the true K.

    Figure 3:K-k divergences

However, peculiar cases that deserve to be marked were also observed. Fig. 4 (left-hand side) shows some sample data (D = 2, n = 160, K = 4) in clusters using different colors (red crosses mark the centroids; the weak members are marked with magenta circles, and the members in dubiety with black diamonds). On the right-hand side, the optimum number of partitions estimated by the indices is marked with red circles (occurring at the minimum or maximum, depending on the objective function of the related index).

    Figure 4:Clustering and K estimations of data set m2_4_160

In this example, almost all tested indices correctly estimated the designed 4 partitions when examined within the range 2 < k < 12. However, most indices returned higher estimates when the inspection window was expanded, as pictured in Fig. 5. The left-hand side of the figure shows the estimations for the range 2-22, while the right-hand side shows the range 2-74. Occasionally, more than one local optimum may even appear (as can be seen with the ‘D’ and ‘MD’ indices in the k: 2-22 range).

    Figure 5:Over partitioning of m2_4_160

Several such peculiarities were encountered during the tests. This observation tells us that even if the optimum K value can be accurately determined within a given range, over-partitioning can occur when the range is expanded without considering the uncertainties in the putative clusters.

    4.2 Correlation Results

Tests conducted to explore the relationships between the considered indices and the true errors returned various levels of correlations, as listed in Tab. 1 (G, P, and W respectively denote the Gaussian, Poisson, and Weibull distribution environments). As can be noticed in the table, the proposed ‘rU’ index exhibited the highest (very strong) correlations of all, in each noise environment tested.

Table 1: Correlation results

    4.3 Complexity Comparison

The ‘rU’ index uses only the membership weights, while many of the others need to access the data itself, which can be counted as a benefit in terms of memory requirements. Besides, performing the primary calculations only on the weak members marks an additional optimality of the ‘rU’. This optimality reflects on the time complexity: the ‘rU’ index involves only the uni-dimensional 1st and 2nd level membership weight matrices for the calculations, regardless of the data dimension. Hence, while the time complexity of distance calculations, such as compactness and separation, increases at some indices according to the data dimension, in the ‘rU’ it remains monotonically dependent only on the size.

Tab. 2 implies that the time complexity of the ‘rU’ is considerably better compared to many others [51,54,65]. Consequently, the observed run times vindicated the relative efficiency of the ‘rU’. While the ‘S’ index exhibited by far the worst runtimes of all, the ‘XB’ yielded faster processing than the ‘rU’, yet it showed weaker correlations with the measured clustering errors, besides failing the acceptance tests.

    Table 2:Time complexities

    4.4 Acceptance Tests

It was also observed that the ‘rU’ index passed all acceptance tests, while the Xie-Beni index failed in some cases due to z-differences above the critical value (z = 1.96), as seen in Tab. 3. This raises the suspicion that such indices may exhibit instabilities on some data distributions.

    Table 3:Acceptance test results

    5 Conclusion

The prevalence of research on clustering and cluster validation, which are among the essential tools of data analytics, reveals the importance of the topic. Although the clustering process is exercised under uncertainty, and although validity indices provide explanatory insights into the intrinsic properties of the data, they do not deliver any quantitative assessment of the possible uncertainties in the realized partitioning. They may also be biased, because they measure, to some extent, their conformity with the underlying clustering criteria; several studies report such misleading results. Besides, it is also observed that ignoring the required confidence level may lead to over-partitioning. On the other hand, considering a confidence parameter can reduce the risk of over-partitioning, thereby better preventing potential errors from being carried into the subsequent phases of applications, like machine learning, of which clustering is likely a part. Moreover, an uncertainty estimation can also be utilized to refine predictive functions.

In light of these motivations, this study mainly focuses on clustering validation and proposes a novel internal method designed specifically for fuzzy clustering, based on relative uncertainty. Through discussions of the “validity of validation” itself, it incorporates the confidence parameter into the evaluation of the criteria and reestablishes the objective as optimization of the level of uncertainty. To this end, the mathematical definition of weak members was introduced, as well as the method of estimating uncertainty, apart from validity. The method is independent of the size, the underlying density distribution, the similarity measure, and the clustering algorithm with which the membership weights are obtained.

The proposed relative uncertainty index, named ‘rU’, satisfies all canonical aspects of an unsupervised validity measure. Inclusive tests and comparisons have shown that it can reliably estimate the optimum number of partitions in different data distributions, without stipulating Gaussian or any other distribution, and that it is comparatively robust to over-partitioning. It uses only one-dimensional membership weights, regardless of the data dimensions, similarity measure, and the like. The impact of the low complexity resulting from this optimality has also been positively observed in the runtime efficiencies.

A competent clustering validity measure is expected to correlate with the true error rates. The relevant tests have shown the strongest correlations between the true error statistics and the ‘rU’ index in all test environments, compared to the others. This relationship has been proven consistent through statistical acceptance tests as well, a step which is mostly neglected.

Along with the mentioned contributions, it is probably the first fuzzy validation measure adjustable to the application needs, as it includes the confidence parameter in the evaluation. The uncertainty measure, as an error probability estimate, can also be used as a metric of the credence of the realized fuzzy clustering. Another significant novelty, which can be quite instrumental for tuning the later stages of the involved system, is the ability to exclusively identify the weak members in the designated clusters.

The efficacy of the proposed method is most likely to manifest itself especially with unbalanced or distorted distributions with higher uncertainty. However, the fact that it was specially designed for fuzzy clustering should be noted as a limitation. Besides, datasets more suitable for hierarchical clustering need to be studied separately. A further research opportunity opened is associating the provided uncertainty estimate with an appertaining cost function, for incorporating risk assessment into the classification decision rules. Also, designing a layered iterative classification algorithm that makes use of the diagnosis of weak members is another promising future work.

    Acknowledgement:The authors would like to thank the editors for reviewing this manuscript.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
