
    Using Link-Based Consensus Clustering for Mixed-Type Data Analysis

    2022-11-09 08:17:32
    Computers Materials & Continua, 2022, Issue 1

    Tossapon Boongoen and Natthakan Iam-On

    Center of Excellence in Artificial Intelligence and Emerging Technologies, School of Information Technology, Mae Fah Luang University, Chiang Rai 57100, Thailand

    Abstract: A mix of numerical and nominal data types is commonly present in many modern-age data collections. Examples include banking data, sales history and healthcare records, where continuous attributes like age and nominal ones like blood type are both exploited to characterize account details, business transactions or individuals. However, only a few standard clustering techniques and consensus clustering methods have been provided to examine such data thus far. Given this insight, the paper introduces novel extensions of link-based cluster ensembles, LCE-WCT and LCE-WTQ, that are accurate for analyzing mixed-type data. They promote diversity within an ensemble through different initializations of the k-prototypes algorithm as base clusterings and then refine the summarized data using a link-based approach. Based on the evaluation metric of NMI (Normalized Mutual Information) averaged across different combinations of benchmark datasets and experimental settings, these new models reach an improved level of 0.34, while the best model found in the literature obtains only around 0.24. Besides, the parameter analysis included herein helps to enhance their performance even further, given the relations between clustering quality and the algorithmic variables specific to the underlying link-based models. Moreover, another significant factor, ensemble size, is examined in such a way as to justify a tradeoff between complexity and accuracy.

    Keywords: Cluster analysis; mixed-type data; consensus clustering; link analysis

    1 Introduction

    Cluster analysis has been widely used to explore the structure of a given dataset. This analytical tool is usually employed in the initial stage of data interpretation, especially for a new problem where prior knowledge is limited. The goal of acquiring knowledge from data sources has been a major driving force, which makes cluster analysis one of the most active research subjects. Over several decades, different clustering techniques have been devised and applied to a variety of problem domains, such as biological study [1], customer relationship management [2], information retrieval [3], image processing and machine vision [4], medicine and health care [5], pattern recognition [6], psychology [7] and recommender systems [8]. In addition, the recent development of clustering approaches for cancer gene expression data has attracted a lot of interest amongst computer scientists and biological and clinical researchers [9,10].

    Principally, the objective of cluster analysis is to divide data objects (or instances) into groups (or clusters) such that objects in the same cluster are more similar to each other than to those belonging to different clusters [11]. Objects under examination are normally described in terms of object-specific measurements (e.g., attribute values) or relative ones (e.g., pairwise dissimilarity). Unlike supervised learning, clustering is ‘unsupervised’ and does not require class information, which is typically obtained through manual tagging of category labels on data objects by domain expert(s). While supervised models inherently fail in the absence of data labels, data clustering has proven effective under this constraint. Given its potential, a large number of research studies focus on several aspects of cluster analysis, for instance: dissimilarity (or distance) metrics [12], optimal cluster numbers [13], relevance of data attributes per cluster [14], evaluation of clustering results [15], cluster ensembles or consensus clustering [9], and clustering algorithms and extensions for particular types of data [16]. Specific to the last of these, to which this research belongs, only a few studies have concentrated on clustering of mixed-type (numerical and nominal) data, as compared to the numeric-only and nominal-only counterparts.

    At present, the data mining community has encountered a challenge from large collections of mixed-type data like those collected from the banking and health sectors: web/service access records and biological-clinical data. In the domain of health care, for instance, microarray expressions and clinical details are available for cancer diagnosis [17]. In response, a few clustering techniques have been introduced in the literature for this problem. Some simply transform the underlying mixed-type data to either a numeric-only or a nominal-only format, with which conventional clustering algorithms can be reused. In particular to this view, k-means [18] is a typical alternative for the numerical domain, while dSqueezer [19], an extension of Squeezer [20], has been investigated for the other. Other attempts focus on defining a distance metric that is effective for the evaluation of dissimilarity amongst data objects in a mixed-type dimensional space. These include the k-prototypes [21] and k-centers [22] extensions of k-means.

    Similar to most clustering methods, the aforementioned models are parameterized, so achieving optimal performance across diverse data collections may not be possible. At large, there are two major challenges inherent to mixed-type clustering algorithms. First, different techniques discover different structures (e.g., cluster size and shape) from the same set of data [23-25]. For example, the extensions of k-means are suitable for spherical-shape clusters. This is due to the fact that each individual algorithm is designed to optimize a specific criterion. Second, a single clustering algorithm with different parameter settings can also reveal various structures on the same dataset. A specific setting may be good for a few datasets, but less accurate on others.

    A solution to this dilemma is to combine different clusterings into a single consensus clustering. This process, known as consensus clustering or cluster ensemble, has been reported to provide more robust and stable solutions across different problem domains and datasets [9,24]. Among state-of-the-art approaches, the link-based cluster ensemble, or LCE [26,27], usually delivers accurate clustering results in both numerical and nominal domains. Given this insight, the paper introduces the extension of LCE to mixed-type data clustering, with contributions summarized as follows. Firstly, a new extension of LCE that makes use of k-prototypes as base clusterings is proposed. The resulting models have been assessed on benchmark datasets, and compared to both basic and ensemble clustering techniques. Experimental results point out that the proposed extension usually outperforms those included in this empirical study. Secondly, a parameter analysis with respect to the algorithmic variables of LCE is conducted and emphasized as a guideline for further studies and applications. The rest of this paper is organized as follows. To set the scene for this work, Section 2 presents existing methods for mixed-type data clustering. Following that, Section 3 introduces the proposed extension of LCE, including ensemble generation and estimation of link-based similarity. To assess its performance, the empirical evaluation in Section 4 is conducted on benchmark datasets, with a rich collection of compared techniques. The paper is concluded in Section 5 with directions for future research.

    2 Mixed-Type Data Clustering Methods

    Following the success in numerical and nominal domains, a line of research has emerged with the focus on clustering mixed-type data. One of the initial attempts is the model of k-prototypes, which extends the classical k-means to clustering mixed numeric and categorical data [21]. It makes use of a heterogeneous proximity function to assess the dissimilarity between data objects and cluster prototypes (i.e., cluster centroids). While the Euclidean distance is exploited for the numerical case, the nominal dissimilarity can be directly derived from the number of mismatches between nominal values. This distance function for mixed-type data requires different weights for the contribution of numerical vs. nominal attributes to avoid favoring either type of attribute. Let X = {x1, ..., xN} be a set of N data objects, where each xi ∈ X is described by D attributes, with D = Dn + Dc, i.e., the total number of numerical (Dn) and nominal (Dc) attributes. The distance between an object xi ∈ X and a cluster prototype cp is estimated by the following equation.

    d(xi, cp) = Σ (j = 1..Dn) (xij − cpj)² + γ Σ (j = Dn+1..D) δ(xij, cpj)

    where δ(y, z) = 0 if y = z and 1 otherwise. In addition, γ is a weight for nominal attributes. A large γ suggests that the clustering process favors the nominal attributes, while a small value of γ indicates that numerical attributes are emphasized.
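    The k-prototypes dissimilarity described above can be sketched in a few lines: squared Euclidean distance on the numerical part plus γ times the number of nominal mismatches. The attribute split and the γ value in the usage example are illustrative assumptions, not values from the paper.

    ```python
    def mixed_distance(x, prototype, num_idx, cat_idx, gamma=1.0):
        """k-prototypes style distance between object x and a cluster prototype.

        num_idx / cat_idx: positions of numerical and nominal attributes.
        gamma: weight controlling the influence of nominal mismatches.
        """
        # Squared Euclidean distance over the numerical attributes.
        d_num = sum((x[j] - prototype[j]) ** 2 for j in num_idx)
        # delta(y, z): count one mismatch per differing nominal attribute.
        d_cat = sum(1 for j in cat_idx if x[j] != prototype[j])
        return d_num + gamma * d_cat
    ```

    For example, mixed_distance((1.0, 2.0, 'A'), (0.0, 2.0, 'B'), [0, 1], [2], gamma=0.5) combines a numerical contribution of 1.0 with one nominal mismatch weighted by 0.5, giving 1.5.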

    Besides the aforementioned, k-centers [22] is an extension of the k-prototypes algorithm. It focuses on the effect of attribute values with different frequencies on clustering accuracy. Unlike k-prototypes, which selects the most frequently appearing nominal attribute values as centroids, k-centers also takes into account attribute values with low frequency on centroids. Based on this idea, a new dissimilarity measure is defined. Specifically, the Euclidean distance is used for numerical attributes, while the nominal dissimilarity is derived from the similarity between corresponding nominal attributes. Let xi ∈ X be a data object described by Dn numerical attributes and Dc nominal attributes. The domain of nominal attribute Ag is denoted by {ag(1), ag(2), ..., ag(ng)}, where ng is the number of attribute values of Ag. The distance between data object xi and centroid cp is defined as follows.

    where f(xig, cpg) = {cpg(r) | xig = ag(r)}. The weight parameters β and γ are for numerical and nominal attributes, respectively. According to [22], β is set to 1, while a greater weight is given to γ if nominal-valued attributes are emphasized more, or a smaller value otherwise. A new definition of centroids is also introduced. For numerical attributes, a centroid is represented by the mean of attribute values. For nominal attribute Ag, g ∈ Dc, centroid cpg is an ng-dimensional vector denoted as (cpg(1), cpg(2), ..., cpg(ng)), where cpg(r) can be defined by the next equation.

    cpg(r) = npg(r) / Np

    where npg(r) denotes the number of data objects in the pth cluster with attribute value ag(r), and Np is the number of data objects in the pth cluster. Note that if attribute value ag(r) does not exist in the pth cluster, cpg(r) = 0. The problem of selecting an appropriate clustering algorithm, or the parameter setting of any potential alternative, has proven difficult, especially with a new set of data. In such a case, where prior knowledge is generally minimal, the performance of any particular method is inherently uncertain. To obtain a more robust and accurate outcome, consensus clustering has been put forward and extensively investigated in the past decade. However, while a large number of cluster ensemble techniques for numerical data have been developed [24,26,28-35], very few studies extend such a methodology to mixed-type data clustering. Specific to this subject, the cluster ensemble framework of [36] uses the pairwise similarity concept [24], which is originally designed for continuous data. Though this research area has received little attention thus far, it is crucial to explore the true potential of cluster ensembles for such a problem. This motivates the present research, with the link-based framework being developed and evaluated herein.

    3 Link-Based Consensus Clustering for Mixed-Type Data

    This section presents the proposed framework of LCE for mixed-type data. It includes details of the conceptual model, ensemble generation strategies, link-based similarity measures, and the consensus function that is used to create the final clustering result.

    3.1 Problem Definition

    The LCE approach was initially introduced for gene expression data analysis [9]. Unlike other methods, it explicitly models base clustering results as a link network from which the relations between and within these partitions can be obtained. In the current research, this consensus clustering model is uniquely extended to the problem of clustering mixed-type data, which can be formulated as follows. Let ∏ = {π1, ..., πM} be a cluster ensemble with M base clusterings, each of which returns a set of clusters πg = {Cg1, ..., Cgkg} whose union covers X, where kg is the number of clusters in the gth clustering. For each xi ∈ X, Cg(xi) denotes the cluster label in the gth base clustering to which data object xi belongs, i.e., Cg(xi) = ‘t’ if xi ∈ Cgt. The problem is to find a new partition π* = {C*1, ..., C*K} of a data set X, where K denotes the number of clusters in the final clustering result, that summarizes the information from the cluster ensemble ∏.

    3.2 LCE Framework for Mixed-Type Data Clustering

    The extended LCE framework for the clustering of mixed-type data involves three steps: (i) creating a cluster ensemble ∏, (ii) aggregating the base clustering results πg ∈ ∏, g = 1...M, into a meta-level data matrix RAl (with l being the link-based similarity measure used to deliver the matrix), and (iii) generating the final data partition π* using the spectral graph partitioning (SPEC) algorithm. See Fig. 1 for an illustration of this framework.

    Figure 1:Framework of LCE extension to mixed-type data clustering

    3.2.1 Generating Cluster Ensemble

    The proposed framework is generalized such that it can be coupled with several different ensemble generation methods. As for the present study, the following four types of ensembles are investigated. Unlike the original work in which the classical k-means is used to form base clusterings, the extended LCE obtains an ensemble by applying k-prototypes to mixed-type data (see Fig. 1 for details). Each base clustering is initialized with a random set of cluster prototypes. Also, the variable γ of k-prototypes is arbitrarily selected from the set {0.1, 0.2, 0.3, ..., 5}.

    Full-space + Fixed-k: Each πg ∈ ∏ is formed using the data set X ∈ RN×D with all D attributes. The number of clusters in each base clustering is fixed to ⌈√N⌉. Intuitively, to obtain a meaningful partition, k becomes 50 if ⌈√N⌉ > 50.

    Full-space + Random-k: Each πg is obtained using the data set with all attributes, and the number of clusters is randomly selected from the set {2, ..., ⌈√N⌉}. Note that both the ‘Fixed-k’ and ‘Random-k’ generation strategies were initially introduced in the primary work of [30].

    Subspace + Fixed-k: Each πg is created using the data set with a subset of the original attributes, and the number of clusters is fixed to ⌈√N⌉. Following the studies of [37] and [38], a data subspace X′ ∈ RN×D′ is selected from the original data X ∈ RN×D, where D is the number of original attributes and D′ < D. In particular, D′ is randomly chosen as follows.

    D′ = ⌈Dmin + α(Dmax − Dmin)⌉

    where α ∈ [0,1] is a uniform random variable. Besides, Dmin and Dmax are user-specified parameters, which have default values of 0.75D and 0.85D, respectively.

    Subspace + Random-k: Each πg is generated using the dataset with a subset of attributes, and the number of clusters is randomly selected from the set {2, ..., ⌈√N⌉}.
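    The subspace dimensionality D′ described above, drawn uniformly between Dmin and Dmax, can be sketched as follows. The fractional defaults of 0.75D and 0.85D are the values stated in the text; the function name is an illustrative choice.

    ```python
    import math
    import random

    def subspace_size(D, d_min_frac=0.75, d_max_frac=0.85, rng=random):
        """Draw the reduced dimensionality D' for one subspace ensemble member.

        D' = ceil(D_min + alpha * (D_max - D_min)), with alpha uniform on [0, 1]
        and D_min, D_max given as fractions of the original dimensionality D.
        """
        d_min, d_max = d_min_frac * D, d_max_frac * D
        alpha = rng.random()  # uniform random variable in [0, 1)
        return math.ceil(d_min + alpha * (d_max - d_min))
    ```

    For D = 20 this yields a value between 15 and 17 attributes; the D′ selected attributes themselves would then be sampled without replacement from the original D.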

    3.2.2 Summarizing Multiple Clustering Results

    Having obtained the ensemble ∏, the corresponding base clustering results are summarized into an information matrix RAl ∈ [0,1]N×P, from which the final data partition π* can be created. Note that P denotes the total number of clusters in the ensemble under examination. For each clustering πg ∈ ∏ and its corresponding clusters {Cg1, ..., Cgkg}, a matrix entry RAl(xi, cl) represents the association degree that data object xi ∈ X has with each cluster cl ∈ {Cg1, ..., Cgkg}, which can be calculated by the next equation.

    RAl(xi, cl) = 1 if cl = Cg*(xi), and sim(cl, Cg*(xi)) otherwise

    where Cg*(xi) is the cluster to which sample xi has been assigned. In addition, sim(Cx, Cy) ∈ [0,1] denotes the similarity between any two clusters Cx, Cy ∈ πg, which can be discovered using the link-based algorithm l presented next.
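    The refined-association idea above can be sketched for a single base clustering: an object's own cluster gets degree 1, and every other cluster in the same clustering gets the link-based similarity to the assigned cluster. The similarity function is passed in as a parameter here, standing in for the WCT or WTQ measures defined next.

    ```python
    def refined_association(labels, sim):
        """One base clustering's slice of the refined association matrix RA.

        labels: the cluster label assigned to each object by one base clustering.
        sim(cx, cy): link-based similarity between two cluster labels, in [0, 1].
        Returns {object index: {cluster label: association degree}}.
        """
        clusters = sorted(set(labels))
        ra = {}
        for i, assigned in enumerate(labels):
            # Degree 1 for the assigned cluster; sim(...) for the others.
            ra[i] = {c: 1.0 if c == assigned else sim(assigned, c)
                     for c in clusters}
        return ra
    ```

    Stacking these slices over all M base clusterings, column-wise, gives the full N×P matrix RAl used by the consensus function.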

    Weighted Connected-Triple (WCT) Algorithm: WCT has been developed to evaluate the similarity between any pair of clusters Cx, Cy ∈ ∏. At the outset, the ensemble ∏ is represented as a weighted graph G = (V, W), where V is the set of vertices, each representing a cluster in ∏, and W is a set of weighted edges between clusters. The weight |wxy| ∈ [0,1] assigned to the edge wxy ∈ W between Cx, Cy ∈ V is estimated by the next equation.

    |wxy| = |Lx ∩ Ly| / |Lx ∪ Ly|

    where Lz ⊆ X denotes the set of data objects belonging to cluster Cz ∈ ∏. Note that G is an undirected graph such that |wxy| is equivalent to |wyx|, ∀Cx, Cy ∈ V. The WCT algorithm is summarized in Fig. 2. Following that, the similarity between clusters Cx and Cy can be estimated by the next equation.

    sim(Cx, Cy) = (WCTxy / WCTmax) × DC

    where WCTmax is the maximum WCTx′y′ value of any two clusters Cx′, Cy′ ∈ V and DC ∈ [0,1] is a constant decay factor (i.e., the confidence level of accepting two non-identical clusters as being similar). With this link-based similarity metric, sim(Cx, Cy) ∈ [0,1] with sim(Cx, Cx) = 1, ∀Cx ∈ V. It is also symmetric, such that sim(Cx, Cy) = sim(Cy, Cx).
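    A minimal sketch of the WCT idea follows, under the assumption (standard in connected-triple measures) that the triple count between two clusters sums min(wxz, wyz) over every shared neighbour z; the full algorithm in Fig. 2 may differ in detail.

    ```python
    from itertools import combinations

    def jaccard_weight(Lx, Ly):
        """Edge weight between two clusters: shared members over the union."""
        Lx, Ly = set(Lx), set(Ly)
        return len(Lx & Ly) / len(Lx | Ly) if Lx | Ly else 0.0

    def wct_similarities(members, dc=0.9):
        """Pairwise WCT similarity for all clusters in an ensemble.

        members: {cluster label: set of object ids}. Triple counts are
        normalised by the largest count and damped by the decay factor DC.
        """
        labels = sorted(members)
        w = {(x, y): jaccard_weight(members[x], members[y])
             for x, y in combinations(labels, 2)}

        def weight(a, b):  # undirected lookup
            return w.get((a, b)) or w.get((b, a)) or 0.0

        wct = {}
        for x, y in combinations(labels, 2):
            # Sum min(w_xz, w_yz) over every possible triple centre z.
            wct[(x, y)] = sum(min(weight(x, z), weight(y, z))
                              for z in labels if z not in (x, y))
        wct_max = max(wct.values(), default=0.0) or 1.0
        return {pair: dc * v / wct_max for pair, v in wct.items()}
    ```

    With three pairwise-overlapping clusters, every pair shares one triple centre of equal strength, so all similarities collapse to DC itself.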

    Figure 2: The summarization of the WCT algorithm

    Weighted Triple-Quality (WTQ) Algorithm: WTQ is inspired by the initial measure of [39], as it discriminates the quality of shared triples between a pair of vertices in question. Specifically, the quality of each vertex is determined by the rarity of the links connecting it to other vertices in the network. With a weighted graph G = (V, W), the WTQ measure of vertices vx, vy ∈ V with respect to each centre of a triple vz ∈ V is estimated by

    WTQzxy = 1 / Wz

    provided that

    Wz = Σ (vt ∈ Nz) |wzt|

    where Nz ⊆ V denotes the set of vertices directly linked to the vertex vz, such that ∀vt ∈ Nz, wzt ∈ W. A pseudocode of the WTQ measure is described in Fig. 3. Following that, the similarity between clusters Cx and Cy can be estimated by

    sim(Cx, Cy) = (WTQxy / WTQmax) × DC

    where WTQmax is the maximum WTQx′y′ value of any two clusters and DC ∈ [0,1] is a decay factor.
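    The WTQ accumulation over all shared triple centres can be sketched as below, assuming (as in the equations above) that each centre z contributes 1/Wz, so triples through rarely connected centres count for more.

    ```python
    def wtq(x, y, weights):
        """Accumulated WTQ between vertices x and y of a weighted cluster graph.

        weights: {(a, b): w} with one entry per undirected edge, w > 0.
        """
        def w(a, b):  # undirected lookup
            return weights.get((a, b), weights.get((b, a), 0.0))

        vertices = {v for edge in weights for v in edge}
        total = 0.0
        for z in vertices - {x, y}:
            if w(x, z) > 0 and w(y, z) > 0:  # z is a shared neighbour (triple centre)
                w_z = sum(w(z, t) for t in vertices - {z})  # total edge weight at z
                total += 1.0 / w_z  # rare centres contribute more
        return total
    ```

    As with WCT, the raw WTQ values would then be divided by WTQmax and scaled by DC to obtain sim(Cx, Cy).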

    3.2.3 Creating Final Data Partition

    Having acquired RAl, the spectral graph-partitioning (SPEC) algorithm [40] is used to create the final data partition. This technique was first introduced by [28] as part of the Hybrid Bipartite Graph Formation (HBGF) framework. In particular, SPEC is exploited to divide a bipartite graph, which is transformed from the matrix BA ∈ {0,1}N×P (a crisp variation of RAl), into K clusters. Given this insight, HBGF can be considered as the baseline model of LCE. The process of generating the final data partition π* from this RAl matrix is summarized as follows. At first, a weighted bipartite graph G′ = (V′, W′) is constructed from the matrix RAl, where V′ = VX ∪ VC is a set of vertices representing both the data objects VX and the clusters VC, and W′ denotes a set of weighted edges. The weight |w′ij| of the edge w′ij connecting vertices vi, vj ∈ V′ can be defined by

    Figure 3: The summarization of the WTQ algorithm

    · |w′ij| = 0 when vi, vj ∈ VX or vi, vj ∈ VC.

    · Otherwise, |w′ij| = RAl(vi, vj) when vi ∈ VX and vj ∈ VC. Note that G′ is bi-directional such that |w′ij| = |w′ji|. In other words, W′ ∈ [0,1](N+P)×(N+P) can also be specified as the block matrix with RAl in the upper-right block, its transpose in the lower-left block, and zeros elsewhere.

    After that, the K largest eigenvectors u1, u2, ..., uK of W′ are used to produce the matrix U = [u1 u2 ... uK], in which the eigenvectors are stacked in columns. Then, another matrix U* ∈ [0,1](N+P)×K is formed by normalizing each row of U to unit length. By considering each row of U* as a K-dimensional embedding of a graph vertex, i.e., a sample in [0,1]K, k-means is finally used to generate the final partition π* = {C*1, ..., C*K} of K clusters.
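    The embedding step just described can be sketched with NumPy: build the symmetric bipartite affinity from RAl, take the K largest eigenvectors, and row-normalise. This is a simplified sketch of the SPEC consensus function, not the full algorithm of [40]; the final k-means step on the rows is omitted.

    ```python
    import numpy as np

    def spec_partition_embedding(ra, K):
        """Spectral embedding of the bipartite object-cluster graph.

        ra: (N x P) refined association matrix RAl.
        Returns the (N + P) x K matrix U* of row-normalised eigenvectors;
        running k-means on its rows would yield the final partition.
        """
        n, p = ra.shape
        # W' = [[0, RA], [RA^T, 0]]: objects connect only to clusters.
        w = np.zeros((n + p, n + p))
        w[:n, n:] = ra
        w[n:, :n] = ra.T
        vals, vecs = np.linalg.eigh(w)              # symmetric, real spectrum
        u = vecs[:, np.argsort(vals)[::-1][:K]]     # K largest eigenvectors
        norms = np.linalg.norm(u, axis=1, keepdims=True)
        norms[norms == 0] = 1.0                     # leave all-zero rows as-is
        return u / norms
    ```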

    4 Performance Evaluation

    To obtain a rigorous assessment of LCE for mixed-type data clustering, this section presents the framework that is systematically designed and employed for the performance evaluation.

    4.1 Investigated Datasets

    Five benchmark datasets obtained from the UCI repository [41] are included in this investigation, with Tab. 1 giving their details. Abalone consists of 4,177 instances, where eight physical measurements are used to divide these data into 28 age groups of abalone. There is only one categorical attribute, while the rest are continuous. Acute Inflammations was originally created by a medical expert to assess a decision support system that performs the presumptive diagnosis of two diseases of the urinary system: acute inflammation of the urinary bladder and acute nephritis [42]. There are 120 instances, each representing a potential patient with six symptom attributes (1 numerical and 5 categorical). Heart Disease contains 303 records of patients collected from the Cleveland Clinic Foundation. Each data record is described by 13 attributes (5 numerical and 8 nominal) regarding heart disease diagnosis. This dataset is divided into two classes referring to the presence and absence of heart disease in the examined patients. Horse Colic has 368 data records of injured horses, each of which is described by 27 attributes (7 numerical and 19 nominal). These collected instances are categorized into two classes: ‘Yes’ indicating that the lesion is surgical and ‘No’ otherwise. About 30% of the original values are missing. For simplicity, missing nominal values in this dataset are treated as a new nominal value. In the case of missing numerical values, the mean of the corresponding attribute is used. Mammographic Masses contains mammogram data of 961 patient records collected at the Institute of Radiology of the University Erlangen-Nuremberg between 2003 and 2006. The five attributes used to describe each record are the BI-RADS assessment, age and three BI-RADS attributes. This dataset possesses two class labels referring to the severity of a mammographic mass lesion: benign (516 instances) and malignant (445 instances).

    Table 1: Description of datasets: number of data points (N), attributes (D) and number of classes (K)

    Dataset               N      D   K
    Abalone               4,177  8   28
    Acute Inflammations   120    6   2
    Heart Disease         303    13  2
    Horse Colic           368    27  2
    Mammographic Masses   961    5   2

    4.2 Experimental Design

    This experiment aims to examine the quality of the LCE-WCT and LCE-WTQ extensions of LCE for clustering mixed numeric and nominal data. For these extended models, where k-prototypes is used for creating a cluster ensemble, the parameter γ of this base clustering algorithm is randomly selected from {0.1, 0.2, ..., 5}. The results of the LCE models are compared against a large number of standard clustering techniques and advanced cluster ensemble approaches. At first, this includes four standard clustering algorithms: k-prototypes, k-centers, k-means (KM) and dSqueezer. In particular, the weight parameter γ is randomly selected from {0.1, 0.2, ..., 5} for each run of k-prototypes and k-centers. In order to exploit k-means, a mixed-type dataset needs to be pre-processed such that each nominal attribute is transformed into β new binary-valued features, where β is the corresponding number of nominal values. For the case of dSqueezer, each numerical data attribute has to be mapped to the corresponding categorical domain using the discretisation method explained by [19]. The set of compared methods also contains twelve different cluster ensemble techniques that have been reported in the literature for their effectiveness in combining clustering results: four graph-based methods, HBGF [28], CSPA [32], HGPA [32] and MCLA [32]; two pairwise-similarity based methods [24], EAC-SL and EAC-AL; and six feature-based methods, IVC [43], MM [33], QMI [33], AGG-F [29], AGG-LSF [29] and AGG-LSR [29]. The experimental settings employed in this evaluation are exhibited below. Note that the performance of standard clustering algorithms is always assessed over the original data, without using any information from cluster ensembles.

    · Cluster ensemble methods are investigated using four different ensemble types: Full-space + Fixed-k, Full-space + Random-k, Subspace + Fixed-k, and Subspace + Random-k.

    · An ensemble size (M) of 10 base clusterings is used in the experiments.

    · As in [24,28,29], each method divides the data points into a partition of K clusters (the number of true classes for each dataset), which is then evaluated against the corresponding true partition. Note that the true classes are known for all datasets but are not explicitly used by the cluster ensemble process. They are only used to evaluate the quality of the clustering results.

    · The quality of each cluster ensemble method with respect to a specific ensemble setting is generalized as the average over 50 runs. Based on the central limit theorem (CLT), the statistics observed in a controlled experiment can be treated as approximately normally distributed [43].

    · The constant decay factor (DC) of 0.9 is exploited with WCT and WTQ algorithms.

    4.3 Performance Measurements and Comparison

    Provided that external class labels are available for all experimented datasets, the final clustering results are evaluated using the validity index of Normalized Mutual Information (NMI) introduced by [32]. Other quality measures, such as Classification Accuracy (CA; [44]) and Adjusted Rand Index (AR; [45]), could similarly be used. However, unlike other criteria, NMI is not biased by a large number of clusters, thus providing a reliable conclusion. This also simplifies the magnitude of the evaluation results and their comprehension. This quality index measures the average mutual information (i.e., the degree of agreement) between two data partitions. One is obtained from a clustering algorithm (π*) while the other is taken from a priori information, i.e., the known class labels (∏′). With NMI ∈ [0,1], the maximum value indicates that the clustering result and the original classes completely match. Given two data partitions of K clusters and K′ classes, NMI is computed by the following equation.

    NMI = [Σ (i = 1..K) Σ (j = 1..K′) ni,j log(N·ni,j / (ni·mj))] / sqrt([Σ (i = 1..K) ni log(ni/N)] · [Σ (j = 1..K′) mj log(mj/N)])

    where ni,j is the number of data objects shared by cluster i and class j, ni is the number of data objects in cluster i, mj is the number of data objects in class j and N is the total number of data objects. To compare the performance of different cluster ensemble methods, the overall quality measure for a specific experimental setting (i.e., dataset and ensemble type) is obtained as the average of the NMI values across 50 trials. These method-specific means may be used for comparison purposes only to a certain extent. To achieve a more reliable assessment, the number of times (or frequency) that one technique is ‘significantly better’ and ‘significantly worse’ (at the 95% confidence level) than the others is also considered here. This comparison method has been successfully exploited by [9] and [46] to draw trustworthy conclusions from the results generated by different cluster ensemble approaches. Based on these, it is useful to compare the frequencies of better (B) and worse (W) performance between methods. The overall measure (B−W) is also used as a summarization.
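    The NMI computation above can be sketched directly from a pair of label vectors; the contingency counts ni,j, ni and mj are accumulated with counters, in the Strehl-Ghosh normalisation used here.

    ```python
    import math
    from collections import Counter

    def nmi(pred, truth):
        """Normalized mutual information between two labelings of the same objects."""
        n = len(pred)
        n_i = Counter(pred)            # cluster sizes n_i
        m_j = Counter(truth)           # class sizes m_j
        n_ij = Counter(zip(pred, truth))  # contingency counts n_ij
        num = sum(c * math.log(n * c / (n_i[i] * m_j[j]))
                  for (i, j), c in n_ij.items())
        den = math.sqrt(sum(c * math.log(c / n) for c in n_i.values())
                        * sum(c * math.log(c / n) for c in m_j.values()))
        return num / den if den else 0.0
    ```

    Identical partitions score 1, and a clustering independent of the classes scores 0, matching the [0,1] range stated above.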

    4.4 Experimental Results

    Fig. 4 shows the overall performance of the different clustering methods, as the average NMI measure across all investigated datasets and ensemble types. Based on this, LCE-WCT and LCE-WTQ are similarly more effective than their baseline model (i.e., HBGF), whilst significantly improving the quality of the data partitions acquired by the base clusterings, i.e., k-prototypes. Their performance levels are also better than the other cluster ensemble methods and standard clustering algorithms included in this evaluation. Note that CSPA and k-means are the most accurate amongst the aforementioned two groups of compared methods. In addition, feature-based approaches such as QMI and IVC are unfortunately incapable of enhancing the accuracy of base clustering results. Dataset-specific results are given in Tabs. A to E of the Supplementary (https://drive.google.com/file/d/1I62X5LTDQ_u6feFx57tW9oqwDLtfu4eH/view?usp=sharing).

    Figure 4: Performance of different clustering methods, averaged across five datasets and four ensemble types. Note that each error bar represents the standard deviation of the corresponding average

    To further evaluate the quality of the identified techniques, the number of times (or frequency) that one method is significantly better and worse (at the 95% confidence level) than the others is assessed across all experimented datasets and ensemble types. Tabs. 2 and 3 present, for each method, the frequencies of significantly better (B) and significantly worse (W) performance, respectively. According to the frequencies shown in Tab. 2, LCE-WCT and LCE-WTQ perform equally well on most of the examined datasets. EAC-AL is exceptionally effective on the ‘Abalone’ data, while the three graph-based approaches of CSPA, HGPA and MCLA are of good quality on ‘Heart Disease’ and ‘Horse Colic’. Note that k-means and k-prototypes are the best amongst the basic clustering techniques. It is also interesting to see that the better-performance statistics of the feature-based approaches are usually lower than those of the standard clusterings considered here. These findings can be similarly observed in Tab. 3, which illustrates the frequencies of worse performance (W). In this specific evaluation context, k-means is notably effective for most datasets and outperforms many graph-based and pairwise-similarity based cluster ensemble methods.

    Besides, the relations between the performance of the experimented cluster ensemble methods and the different ensemble types are also examined: Full-space + Fixed-k, Full-space + Random-k, Subspace + Fixed-k, and Subspace + Random-k. Specifically, Fig. 5 shows the average NMI measures of the different approaches across datasets. According to this statistical illustration, LCE-WCT and LCE-WTQ are more effective than the other techniques across different ensemble types, with their best performance being obtained with ‘Subspace + Fixed-k’. HBGF and the three graph-based approaches (CSPA, HGPA and MCLA) are also more effective on the Subspace ensemble types, as compared to the Full-space alternatives. While both the ‘Fixed-k’ and ‘Random-k’ strategies lead to equally good performance of the link-based techniques, the feature-based and pairwise-similarity based methods perform better using the latter.

    Table 2: Number of times that one method performs significantly better than others, summarized across five datasets and four types of ensemble. The best two per dataset are highlighted in boldface

    Table 3: Number of times that one method performs significantly worse than others, summarized across five datasets and four types of ensemble. The best two per dataset are highlighted in boldface


    Figure 5: Performance of clustering methods, categorized by four ensemble types

    The quality of LCE-WCT and LCE-WTQ with respect to perturbation of the DC and M parameters is also studied for the clustering of mixed-type data. Fig. 6 presents the relation between different values of DC ∈ {0.1, ..., 0.9} and the quality of the data partitions generated by both LCE methods, as the average NMI measure across all ensemble types, where M is fixed to 10 for comparison simplicity. In general, the performance of LCE-WCT and LCE-WTQ gradually improves as the value of DC increases. Another parameter to be assessed is the ensemble size (M). Fig. 7 shows the association between the performance of the various techniques and different values of M ∈ {10, 20, ..., 100}. Both LCE methods perform consistently better than their baseline model competitors across different ensemble sizes, where the decay factor (DC) is fixed to 0.9 for simplicity. Their performance also improves with increasing ensemble size.

    Figure 6: Relations between DC ∈ {0.1, 0.2, ..., 0.9} and the performance of LCE methods (averages of NMI over four ensemble types for each dataset). The measure of HBGF is also included for comparison

    Figure 7:Relations between M ∈{10,20,...,100} and performance of LCE methods (presented as the averages of NMI over four ensemble types for each dataset)

    5 Conclusion

    This paper has presented a novel extension of link-based consensus clustering to mixed-type data analysis. The resulting models have been rigorously evaluated on benchmark datasets, using several ensemble types. The comparison against different standard clustering algorithms and a large set of well-known cluster ensemble methods shows that the link-based techniques usually provide solutions of higher quality than those obtained by their competitors. Furthermore, the investigation of their behavior with respect to perturbation of the algorithmic parameters also suggests robust performance. Such a characteristic makes link-based cluster ensembles highly useful for the exploration and analysis of a new set of mixed-type data, where prior knowledge is minimal. Because of its scope, there are many possibilities for extending the current research. Firstly, other link-based similarity measures may be explored. As more information within a link network is exploited, link-based cluster ensembles are likely to be more accurate (see the relevant findings in the initial work [30,31], where the use of SimRank and its variants is examined). However, it is important to note that such a modification is more resource intensive and less accurate in a noisy environment than the present setting. Secondly, the performance of link-based cluster ensembles may be further improved using an adaptive decay factor (DC), which is determined from the dataset under examination.

    The diversity of cluster ensembles has a positive effect on the performance of the link-based approach.It is thus interesting to observe the behavior of the proposed models under new ensemble generation strategies,e.g.,the random forest method for clustering [47],which may impose a higher diversity amongst base clusterings.Another non-trivial topic is the determination of the significance of ensemble components.This discrimination or selection process usually leads to a better outcome.The coupling of such a mechanism with link-based cluster ensembles is to be further studied.Despite its performance,the consensus function of spectral graph partitioning(SPEC) can be inefficient with a large RA matrix.This can be overcome through the approximation of the eigenvectors required by SPEC.As a result,the time complexity becomes linear in the matrix size,but with possible information loss.A better alternative has been introduced by [48]via the notion of Power Iteration Clustering (PIC).It does not actually find eigenvectors but discovers interesting instances of their combinations.As a result,it is very fast and has proven more effective than conventional SPEC.The application of PIC as a consensus function of link-based cluster ensembles is a crucial step towards making the proposed approach truly effective in terms of both run-time and quality.Other possible future work includes the use of the proposed method to support accurate clustering for fuzzy reasoning [49],the handling of data with missing values [50],and data discretization [51].
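The mechanism behind PIC can be sketched in a few lines: repeatedly multiply a vector by the row-normalised affinity matrix and stop early, before the iterate collapses to the trivial dominant eigenvector. Entries mix quickly inside a well-connected cluster and slowly across clusters, so the early-stopped 1-D embedding already separates the clusters. This is a simplified illustration, not the full algorithm of [48]: we use a fixed iteration count and a deterministic start instead of PIC's random initialisation and acceleration-based stopping rule:

```python
def pic_embedding(affinity, iters=10):
    """One-dimensional embedding in the style of Power Iteration
    Clustering: iterate v <- W v with W = D^-1 * A (row-normalised
    affinity), L1-normalising after each step.  The final grouping
    would be read off this embedding, e.g., with 1-D k-means."""
    n = len(affinity)
    # Row-normalise the affinity matrix: each row sums to one.
    norm = [[a / sum(row) for a in row] for row in affinity]
    v = [float(i + 1) for i in range(n)]  # deterministic start for the sketch
    for _ in range(iters):
        v = [sum(norm[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(abs(x) for x in v)
        v = [x / s for x in v]
    return v
```

On a near-block-diagonal affinity matrix, points within the same block end up with almost identical embedding values while a clear gap remains between blocks, which is what makes the early iterate useful as a cheap substitute for the eigenvectors required by SPEC.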

    Acknowledgement: This research work is partly supported by Mae Fah Luang University and the Newton Institutional Links 2020-21 project (British Council and National Research Council of Thailand).

    Funding Statement: This work is funded by the Newton Institutional Links 2020-21 project: 623718881, jointly by the British Council and the National Research Council of Thailand (www.britishcouncil.org). The first author is the project PI, with the other participating as a Co-I.

    Conflicts of Interest: There is no conflict of interest to report regarding the present study.
