
    Sparse Reconstructive Evidential Clustering for Multi-View Data

2024-03-01 10:59:00  Chaoyu Gong and Yang You
IEEE/CAA Journal of Automatica Sinica, 2024, Issue 2

    Chaoyu Gong and Yang You

Abstract—Although many multi-view clustering (MVC) algorithms with acceptable performances have been presented, to the best of our knowledge, nearly all of them need to be fed with the correct number of clusters. In addition, these existing algorithms create only hard and fuzzy partitions for multi-view objects, which are often located in highly overlapping areas of the multi-view feature space. The adoption of hard and fuzzy partitions ignores the ambiguity and uncertainty in the assignment of objects, likely leading to performance degradation. To address these issues, we propose a novel sparse reconstructive multi-view evidential clustering algorithm (SRMVEC). Based on a sparse reconstructive procedure, SRMVEC learns a shared affinity matrix across views, and maps multi-view objects to a 2-dimensional human-readable chart by calculating two newly defined mathematical metrics for each object. From this chart, users can detect the number of clusters and select several objects existing in the dataset as cluster centers. Then, SRMVEC derives a credal partition under the framework of evidence theory, improving the fault tolerance of clustering. Ablation studies show the benefits of adopting the sparse reconstructive procedure and evidence theory. Besides, SRMVEC delivers effectiveness on benchmark datasets by outperforming some state-of-the-art methods.

I. INTRODUCTION

MULTI-view clustering (MVC) aims to categorize n multi-view objects into several clusters so that objects in the same cluster are more similar than those from different ones [1], [2]. Objects are described by several views of data in an MVC problem, e.g., documents originally written in English as one view and their translations to French, German, Spanish and Italian as 4 other views, and MVC algorithms often provide improved performance compared to single-view clustering algorithms [3], [4]. Therefore, MVC has been successfully applied to various applications including computer vision [5], [6], social multimedia [7] and so on. Based on different philosophies, many MVC methods have been proposed and can be classified into several categories [4], such as generative methods [8], [9], subspace-clustering-based methods [10], deep-learning-based methods (e.g., a novel deep sparse regularizer learning model that learns data-driven sparse regularizers is proposed in [11] to cluster multi-view data), spectral-clustering-based methods [12], [13], graph-learning-based methods [14], [15] and so on. Broadly speaking, the above methods are concerned with solving the MVC problem in a way that improves either efficiency or clustering performance, assuming that the correct number of clusters is known.

Motivations: However, estimating the number of clusters for an MVC problem could be substantially more challenging and urgent than grouping the objects [21]. Many state-of-the-art MVC algorithms suffer from performance degradation when the correct cluster number is not provided, which can be illustrated quantitatively through the following example. We run 5 MVC algorithms on the widely used YALE three-view dataset [22] consisting of 15 clusters. The results in terms of normalized mutual information (NMI) [23] are shown in Fig. 1. As can be seen, all the counted algorithms have lower NMI values when they are not fed with the correct cluster number, compared to the results shown in the dotted circle. To date, there are several techniques based on specific criteria to estimate the number of clusters, such as the KL statistic [24], the elbow method [25], and the gap statistic [26]. Nevertheless, these criteria are designed for single-view data. Extending these techniques to MVC problems by concatenating multiple views into a single high-dimensional vector is not feasible. The first reason is that such concatenation results in very high-dimensional feature vectors. Without an appropriate metric learning process, the calculation of distances between these high-dimensional vectors is inevitably affected by the curse of dimensionality [27], significantly reducing the effectiveness of cluster number estimation. Another reason is that the estimation result may be biased toward a view (or views) that yields a dominantly large number of features.

Moreover, each cluster has one most representative object (known as the cluster center), as discussed in [28]. These cluster centers provide a variety of useful information in addition to directly guiding the clustering process. For instance, in electrical consumption clustering analysis, the power supply strategy for clients within the same cluster is frequently designed based on the consumption habits of one typical customer (the cluster center) [29]. In this case, an overall solution to the issues affecting all objects in the same cluster may be found through the analysis of a cluster center [30]. Hence the second motivation: to identify the cluster center in each cluster of multi-view data. Besides, the hard/fuzzy partition that conventional MVC algorithms generate is not fine-grained enough, especially for multi-view objects in heavily overlapping regions. To enhance fault tolerance and clustering performance, a further motivation is to use a more fine-grained partition to cluster multi-view data.

Fig. 1. Average NMI values of five MVC algorithms with different numbers of clusters. The considered algorithms, including AE2Net [16], MLRSSC [17], MvWECM [18], SDMVC [19] and DeepNMF [20], are run 5 times.

Technical Issues: As shown in [17], the common technical challenge in MVC is to learn the affinity matrix, which is usually learned through the self-expression procedure [31], i.e.,

min_{W(1),...,W(p)} Σ_{s=1}^{p} ( ‖X(s) − X(s)W(s)‖_F^2 + α‖W(s)‖_1 + β‖W(s)‖_F^2 )    (1)
under the constraints diag(W(s)) = 0, ∀s ∈ {1,...,p}, where p is the number of views, X(s) ∈ R^{Ds×n} is the data matrix of the sth view, n is the number of objects, Ds is the number of dimensions of the sth view, α and β are two regularization parameters, and W(s) ∈ R^{n×n} is the affinity matrix of the sth view to be learned. After learning the affinity matrix W(s) for all views, the affinity matrix W̄ of the entire multi-view data is calculated as the element-wise average [17], [32]. The similarity between intra-view dimensions cannot be held consistent before and after self-expression, because these self-expression techniques only concentrate on keeping comparable objects with similar self-expression outcomes and disregard the relevant information between various dimensions. Besides, calculating W̄ as the element-wise average of all W(s) is too arbitrary to allow all views to share a consistent affinity matrix. Thus, the first technical challenge is how to create a new self-expression objective function that can preserve the relationships between intra-view dimensions and allow the inter-view data to share the same affinity matrix. The second technical challenge is how to theoretically design a mapping that visualizes the properties of each multi-view object in order to discover the cluster centers after learning the affinity matrix.
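For concreteness, the per-view self-expression step in (1) admits a closed-form solution when only the Frobenius-norm regularizer is kept. The following minimal sketch (the ℓ1 term is dropped for brevity, and all function names are illustrative rather than the paper's code) shows how a per-view affinity matrix and the element-wise average W̄ of [17], [32] could be computed:

```python
import numpy as np

def self_expression_affinity(X, beta=1e-2):
    """Solve min_W ||X - X @ W||_F^2 + beta * ||W||_F^2 for one view,
    whose closed form is W = (X^T X + beta I)^{-1} X^T X.
    The diag(W) = 0 constraint is enforced here by a simple projection,
    a heuristic rather than the exact constrained solution.
    X: (D_s, n) data matrix of the s-th view."""
    n = X.shape[1]
    G = X.T @ X                                   # (n, n) Gram matrix
    W = np.linalg.solve(G + beta * np.eye(n), G)
    np.fill_diagonal(W, 0.0)
    return W

def average_affinity(views, beta=1e-2):
    """Element-wise average of the per-view affinity matrices,
    the fusion step criticized in the text."""
    return np.mean([self_expression_affinity(X, beta) for X in views], axis=0)

# toy usage: two views (20-dim and 30-dim) of the same 50 objects
rng = np.random.default_rng(0)
W_bar = average_affinity([rng.normal(size=(20, 50)), rng.normal(size=(30, 50))])
```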

The credal partition is a novel form of partition proposed by the authors of [33], [34], where the memberships of objects in clusters are defined using mass functions [35] in the framework of evidence theory [36]. Its formalization allows for a detailed description of the ambiguity and uncertainty in clustering membership, making the credal partition particularly appropriate for grouping multi-view objects in overlapping regions. Nevertheless, as almost all current credal-partition methods are created for single-view data, there is still a significant gap between credal partition methods and MVC [28]. This brings up the third technical challenge, i.e., how to structure the learning of a credal partition that can be nested with the affinity matrix acquired from the multi-view self-expression.

Contribution: According to the above discussion, a novel MVC algorithm, named sparse reconstructive multi-view evidential clustering (SRMVEC), is proposed to simultaneously solve the 3 problems rarely addressed in other MVC research, namely, estimating the cluster number, identifying the cluster centers and deriving the fine-grained credal partition. The contributions of this paper are three-fold:

    1) Differently from problem (1), we formulate a new objective function associated with the related information between intra-view dimensions, the view weights and the reconstruction error, enabling multiple views to learn a consistent affinity matrix directly;

2) Based on the learned affinity matrix, the multi-view objects are mapped to a 2-dimensional chart that can be read by humans, using two mathematical metrics that we define for each multi-view object. Users can easily obtain the cluster number from this chart and recognize the objects that can be chosen as the cluster centers;

3) With the help of the discovered cluster centers, the derivation of a credal partition is reformulated as an optimization problem integrated with the learned affinity matrix, fully reflecting the (dis)similarity between any two objects and enhancing the performance of SRMVEC.

II. RELATED WORK

Multi-View Clustering: Almost all existing MVC algorithms are designed to improve clustering performance or efficiency after prespecifying the cluster number. Existing MVC algorithms can be generally divided into the following families [4]. The first comprises generative algorithms that learn the generative models producing the data from clusters, such as the multi-view CMM [37] and its weighted version [9]. Another research line, called discriminative algorithms, proposes to optimize an objective function to seek the clustering result directly. As shown in a recent survey [4], most MVC algorithms are discriminative. They can be further divided into several groups, including the subspace-clustering-based methods (e.g., PGSC [10] and CDMSC2 [38]), the spectral-clustering-based methods (e.g., SMSC [12], SMSCNN [13] and CGL [39]), the NMF-based methods (e.g., MCLES [40]), the kernel-clustering-based methods (e.g., MKCSS [41]) and so on. Recently, researchers have paid increasing attention to using deep learning (e.g., a differentiable network-based method named differentiable bi-sparse multi-view co-clustering [42] and a novel differentiable bi-level optimization network for multi-view clustering [43]) or graph learning (e.g., clustering multi-view data based on the contrastive consensus graph learned by a convolutional network [1]) in the MVC problem, and to improving the scalability and efficiency of MVC algorithms, such as OPLFMVC [44].

Estimating the Cluster Number: Almost all estimation techniques focus on the single-view clustering problem. The literature offers a wide variety of estimation techniques (see, for instance, [45], [46]). These techniques share the same fundamental idea. They first establish a basic clustering method and a clustering criterion, then experiment with various cluster numbers, and finally choose the cluster number u that best matches the ideal clustering criterion on the target dataset. The GAP statistic [26], KL statistic [24], DUNN metric [47], etc., are some of the often-utilized criteria. However, the criteria that are employed are usually established using a metric between single-view objects.
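The shared recipe these techniques follow can be sketched in a few lines. In the sketch below, the silhouette score stands in for criteria such as GAP, KL or DUNN (an illustrative choice, not one used by the paper):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_cluster_number(X, candidates=range(2, 11), seed=0):
    """Generic single-view recipe: run a base clustering for every candidate
    u, score the result with a criterion, and keep the best-scoring u."""
    best_u, best_score = None, -np.inf
    for u in candidates:
        labels = KMeans(n_clusters=u, n_init=10, random_state=seed).fit_predict(X)
        score = silhouette_score(X, labels)   # stand-in clustering criterion
        if score > best_score:
            best_u, best_score = u, score
    return best_u
```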

    Multi-View Learning Methods Based on Evidence Theory:

To the best of our knowledge, there are few evidence-theory-based methods for multi-view data that focus on finding the cluster number in the MVC problem. Evidence theory has been used in multi-view learning scenarios to improve the performance of algorithms [48]. The author of [49] proposes to use evidence theory to classify multi-view medical images and manage the partial volume effect. In [50], an architecture based on generalized evidence processing for data fusion is presented. To fuse multi-view information, an architecture based on a weighted fuzzy evidence theory is proposed to assign evidence obtained from various classification methods [51]. Differently from the above methods using evidence theory to combine information, recent papers focus more on formulating uncertainty in the learning process. In the problem of classifying multi-view samples, the authors of [52] use evidence theory to model the uncertainty of the Dirichlet distribution obtained from a certain view, aiming to flexibly integrate multiple Dirichlet distributions. In the classification process, the mass values correspond to the probabilities of different classes and the overall uncertainty is also modeled. Dempster's rule [36] is used to combine the information from each view to boost the classification performance. In [53], researchers tackle the cross-modal retrieval problem by assigning a belief mass [36], [54] to each query and an overall uncertainty mass based on the collected cross-modal data. In [18], the authors propose an evidential c-means multi-view algorithm but ignore the detection of the cluster number.

III. PRELIMINARIES

Evidence Theory: Let us consider a variable ω taking values in a finite set called the frame of discernment Ω = {ω1, ω2, ..., ωu}. A mass function m (also called a piece of evidence) is defined as a mapping from 2Ω to [0, 1] such that Σ_{A⊆Ω} mΩ(A) = 1, where 2Ω is the power set of Ω. The subsets A satisfying mΩ(A) > 0 are called the focal sets of m. The value of mΩ(A) represents a share of a unit mass allocated to the focal set A, which cannot be allocated to any strict subset of A. In particular, the vacuous mass function such that mΩ(Ω) = 1 corresponds to total ignorance about the value of ω. In this case, the Ω in the brackets is a focal set called the ignorance focal set.
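A mass function can be represented directly as a map from subsets of Ω to masses. A minimal sketch, encoding focal sets as frozensets (an illustrative representation, not the paper's code):

```python
OMEGA = frozenset({"w1", "w2", "w3"})  # frame of discernment

def is_mass_function(m, tol=1e-9):
    """Check the defining properties: every focal set is a subset of Omega,
    masses lie in [0, 1], and they sum to 1."""
    return (all(A <= OMEGA and 0.0 <= v <= 1.0 for A, v in m.items())
            and abs(sum(m.values()) - 1.0) < tol)

# vacuous mass function: total ignorance, the single focal set is Omega itself
m_vacuous = {OMEGA: 1.0}
assert is_mass_function(m_vacuous)
```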

Another equivalent representation of a given mΩ, named the plausibility function, is defined as

PlΩ(A) = Σ_{B∩A≠∅} mΩ(B)    (2)

for all A ⊆ Ω, where B also denotes the focal sets. Assume that there are two mass functions mΩ1 and mΩ2 on the same frame of discernment Ω. Dempster's rule [36], [55] (noted as ⊕) is defined as

(mΩ1 ⊕ mΩ2)(A) = (1/(1−κ)) Σ_{B∩C=A} mΩ1(B) mΩ2(C),  for all nonempty A ⊆ Ω,    (3)

where κ = Σ_{B∩C=∅} mΩ1(B) mΩ2(C) is the degree of conflict between mΩ1 and mΩ2.
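Continuing the frozenset encoding from the sketch above, plausibility (2) and Dempster's rule (3) translate almost literally into code:

```python
def plausibility(m, A):
    """Pl(A): total mass of the focal sets that intersect A, as in (2)."""
    return sum(v for B, v in m.items() if B & A)

def dempster_combine(m1, m2):
    """Dempster's rule (3): conjunctive combination of two mass functions,
    renormalized by 1 - kappa, where kappa is the conflicting mass."""
    combined, kappa = {}, 0.0
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                kappa += v1 * v2                # mass lost to conflict
    if kappa >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {A: v / (1.0 - kappa) for A, v in combined.items()}

m1 = {frozenset({"w1"}): 0.6, OMEGA: 0.4}
m2 = {frozenset({"w1", "w2"}): 0.7, OMEGA: 0.3}
m12 = dempster_combine(m1, m2)                  # fused piece of evidence
```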

    TABLE I NOTATION TABLE

    TABLE II A CREDAL PARTITION ON Ω={ω1,ω2,ω3}

IV. METHOD: SRMVEC

The SRMVEC algorithm consists of two parts: identifying the cluster centers and creating a credal partition for the remaining objects, which are introduced in Sections IV-A and IV-B, respectively.

    A. Identifying the Cluster Centers

Basic Idea: We must first determine the "possibility" that each object is a cluster center before we can detect the number and locations of cluster centers. Such a "possibility" of a cluster center should be high, and its "separation" from other cluster centers should be as large as possible. In light of this, we can choose as the cluster centers those objects that have a high "possibility" and large "separation". It is worth noting that a cluster center refers to the most representative object in a cluster, rather than the object spatially located around the geometric centroid of the cluster.

Specific Procedure: To mathematically propose the two definitions of "possibility" and "separation", the affinity matrix of the multi-view objects should be learned first. Following the idea that data from various views come from the same latent space, we define the learning of the affinity matrix W as

Remark 1 (Some explanations about (7)): We use the nonlinear function exp(·) because it has been widely used in other literature about evidence theory (e.g., [56], [57]), and satisfies

Fig. 2. An illustrative workflow of SRMVEC. The dataset has p = 3 views colored green, yellow and red. The affinity matrix W* is learned using the two-step iterative algorithm, and then cluster centers are selected from the upper-right corner of the Sep-Pos chart. Using the gradient-based algorithm presented in Section V-B, the stress function SSRMVEC(v) in (12) is minimized and the credal partition MΩ is created. Concretely, the mass functions of the n objects belonging to various clusters (i.e., MΩ) are determined so as to match the calculated distance matrix D = (dij) as closely as possible. Each mass mΩi(A) is interpreted as a degree of belief in the proposition "the true cluster of object xi is A". The corresponding hard partition is also obtained using the plausibility-probability transformation defined in (5).

By combining all the |P(xi)| mass functions using (3), we give the following definition.

Definition 1: For each xi, its possibility (Pos) of becoming a cluster center is defined as

We can choose the objects with high Pos as potential cluster centers, based on Definition 1 and the fundamental idea of SRMVEC. Next, the "separation" between one object and the others must be measured using another metric.

Definition 2: The separation (Sep) between xi and the other objects is defined as

if xi does not have the highest Pos in the dataset; otherwise,

The two semantic terms "possibility" and "separation" have now been formulated as two metrics. All the multi-view objects can be mapped to a 2-dimensional Sep-Pos chart, as shown in the middle of Fig. 2, by using Sep and Pos as the horizontal and vertical coordinates, respectively. The cluster centers (colored red), which are clearly separated from the other objects because of their high Pos and Sep, can be visually identified by humans. As a result, the cluster number is also provided.
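Since the closed forms of (8) and (9) are not reproduced above, the following sketch only mirrors the structure of the chart construction: `pos` stands in for Definition 1, and Sep follows the case split of Definition 2 (distance to the nearest object of higher Pos, and the global maximum distance for the single top-Pos object). All names are illustrative assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

def sep_pos_chart(D, pos):
    """Compute Sep for every object and draw the Sep-Pos chart.
    D: (n, n) pairwise distance matrix; pos: length-n 'possibility' scores."""
    pos = np.asarray(pos, dtype=float)
    n = len(pos)
    sep = np.empty(n)
    order = np.argsort(-pos)               # objects sorted by decreasing Pos
    sep[order[0]] = D[order[0]].max()      # top-Pos object: largest distance
    for rank, i in enumerate(order[1:], start=1):
        sep[i] = D[i, order[:rank]].min()  # nearest higher-Pos object
    plt.scatter(sep, pos)                  # centers land in the upper-right corner
    plt.xlabel("Sep"); plt.ylabel("Pos")
    plt.show()
    return sep
```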

Remark 2: Why do we use the evidence combination (3) instead of simple addition or multiplication to fuse the information provided by the objects in P(xi)?

Remark 3: Why do we use a human-intervention method through the Sep-Pos chart instead of an automatic method to detect the cluster centers? Because the clustering task is unsupervised, there may not be a perfectly correct number of clusters in some real-world application scenarios. Users can obtain direct guidance on the cluster centers through the Sep-Pos chart, and can subjectively choose different numbers of clusters for different applications. In addition, as shown in Section VI-B, the correct numbers of cluster centers are always easily distinguished on the Sep-Pos charts for the commonly used multi-view benchmark datasets.

    B. Deriving a Credal Partition

Extension to Graph-Based Multi-View Learning and Other Evidential Multi-View Learning Methods: First, we highlight that the performances of our method SRMVEC and the graph-based methods depend significantly on the learned similarity coefficients between multi-view objects. In other words, the technique of learning graphs for multi-view data can also be used in SRMVEC to learn the affinity matrix W*, and the possibility of each object xi becoming a cluster center can then still be calculated according to (7) and (8). Besides, evidence theory is used in SRMVEC both to fuse information from multiple sources (fusing the degree of support provided by xj for xi to become a cluster center) and to describe the uncertainty in the learning process (e.g., the mass assigned to Θ in (7) denotes the degree of uncertainty about xi becoming a cluster center, and the mass assigned to Ω in the credal partition MΩ denotes the uncertainty in clustering membership). This means that the way we use evidence theory can be extended to measure the uncertainty of the Dirichlet distribution for each view, as done in [52], and to fuse multi-view information, as done in [51].

V. OPTIMIZATION AND COMPLEXITY ANALYSIS

The optimization problem (6) and the minimization of function (12) are solved in Sections V-A and V-B, respectively. The complexity analysis of SRMVEC is shown in Section V-C.

    A. Solving Problem (6)

We design the following two-step iterative algorithm.
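Before detailing the two updates, the overall alternating scheme can be written down generically. A minimal skeleton, in which `update_W`, `update_a` and `objective` are placeholders for the paper's actual update rules for problem (6) rather than known implementations:

```python
def two_step_solver(W, a, update_W, update_a, objective,
                    max_iter=100, tol=1e-6):
    """Alternate the two updates of problem (6) until the objective change
    |L_{tau+1} - L_tau| between adjacent iterations falls below tol."""
    prev = objective(W, a)
    for _ in range(max_iter):
        W = update_W(W, a)    # step 1: update W with the view weights a_s fixed
        a = update_a(W, a)    # step 2: update a_s with W fixed
        cur = objective(W, a)
        if abs(prev - cur) < tol:
            break
        prev = cur
    return W, a
```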

Update W With Fixed a_s: We first rewrite the objective function L(W, a_s) of problem (6) as

B. Minimizing Function SSRMVEC(MΩ, ρ1, ρ2) in (12)

We adopt the following gradient-based method to minimize SSRMVEC(MΩ, ρ1, ρ2) with respect to MΩ, ρ1 and ρ2.

Initialize the mass functions concerning the u cluster centers as

If SSRMVEC(v) has increased between iterations τ−1 and τ, all χe are decreased

and ve is updated by
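The update formula itself is not reproduced above; what the text does specify is a gradient scheme with per-coordinate step sizes χe that are all decreased whenever the stress has risen between two iterations. A hedged sketch of that scheme, with an assumed halving factor and placeholder `stress`/`grad` callables:

```python
import numpy as np

def adaptive_gradient_descent(v0, stress, grad, max_iter=500,
                              chi0=0.1, shrink=0.5):
    """Gradient descent on the stress function: each v_e moves along
    -chi_e * grad_e, and every step size chi_e is damped (here halved,
    an assumed factor) whenever the stress increased between iterations."""
    v = np.asarray(v0, dtype=float)
    chi = np.full_like(v, chi0)        # one step size per coordinate v_e
    prev = stress(v)
    for _ in range(max_iter):
        v = v - chi * grad(v)
        cur = stress(v)
        if cur > prev:
            chi *= shrink              # stress went up: decrease all chi_e
        prev = cur
    return v
```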

We summarize the SRMVEC algorithm in Algorithm 1 and provide an illustrative flow chart in Fig. 2. In Algorithm 1, we divide the whole algorithm into 4 steps, i.e., 1) learning the affinity matrix W*, 2) calculating the Sep and Pos of each object based on the obtained W*, 3) selecting the cluster centers after mapping the objects onto the Sep-Pos chart using Sep and Pos as horizontal and vertical coordinates, and 4) minimizing the stress function SSRMVEC to obtain the credal partition MΩ. In Fig. 2, the first 3 steps are included in the "Identify the cluster centers" part and the 4th step is shown in the "Create a credal partition MΩ" part.

Algorithm 1 SRMVEC Algorithm
Input: Data objects x1, x2, ..., xn, δ = 2, α and β
Output: Credal partition MΩ
1: Learn the affinity matrix W* by solving problem (6);
2: Calculate Pos and Sep for each object according to (8) and (9), respectively;
3: Map the objects to the Sep-Pos chart and detect u cluster centers;
4: Minimize SSRMVEC(MΩ, ρ1, ρ2) and obtain MΩ.
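As a bird's-eye view, the four steps compose into a short pipeline. In the skeleton below every callable is a placeholder for the corresponding procedure (step 3 is human-in-the-loop in the paper, reading the Sep-Pos chart), not an implementation of it:

```python
def srmvec_pipeline(X_views, learn_affinity, pos_and_sep,
                    pick_centers, minimize_stress, alpha, beta):
    """End-to-end skeleton of Algorithm 1."""
    W = learn_affinity(X_views, alpha, beta)   # step 1: solve problem (6)
    pos, sep = pos_and_sep(W)                  # step 2: Pos via (8), Sep via (9)
    centers = pick_centers(sep, pos)           # step 3: read the Sep-Pos chart
    return minimize_stress(W, centers)         # step 4: credal partition M^Omega
```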

    C. Complexity Analysis

Remark 5 (Tuning α and β): We can see from problem (6) that the setting of α and β influences the learning of W*, and the object distribution in the Sep-Pos chart is in turn influenced by their setting. When the cluster centers are not immediately distinguishable, users can alternately tune α and β until the cluster centers can be visually recognized among the other objects in the Sep-Pos chart. In other words, the tuning of α and β is guided by the chart: each adjustment changes the distribution of the objects in the chart, and users can use the chart as a guide to progressively discover the cluster centers. In Section VI-C, we provide an example of how the tuning of α and β works.
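This chart-guided search amounts to a sweep over the grid used in the experiments ({10^-5, ..., 10^0}) with one hyperparameter frozen at a time. A sketch, where `learn_affinity` and `pos_and_sep` are the same placeholders as above:

```python
import numpy as np

def scan_alpha(X_views, learn_affinity, pos_and_sep, beta=1e-3):
    """Fix beta and sweep alpha over the experimental grid, yielding one
    Sep-Pos chart per value; the user stops when centers visually separate."""
    for alpha in 10.0 ** np.arange(-5, 1):     # {1e-5, 1e-4, ..., 1e0}
        W = learn_affinity(X_views, alpha, beta)
        pos, sep = pos_and_sep(W)
        yield alpha, sep, pos                  # plot and inspect each chart
```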

VI. EXPERIMENTS

This section consists of 3 subsections. The comparison between SRMVEC and other SoTA MVC algorithms is shown in Section VI-A, and detailed ablation studies are conducted in Section VI-B to show the contribution of each component in SRMVEC. The empirical convergence analysis, the tuning of hyperparameters and the visualization of clustering results are shown in Section VI-C. The ACC [61] and NMI are used to evaluate the clustering performances. The credal partition created by SRMVEC is converted into a hard partition according to (5), and then we calculate the ACC and NMI. In SRMVEC, the power exponent δ is fixed to 2, and α and β vary in {10^-5, 10^-4, ..., 10^0}. As summarized in Table III, the used benchmark datasets include BBCSport, Reuters_s, COIL-20, WebKB2, Digit, 3Source, Wikipedia, Yale, Caltech101, Reuters, Animal10 and CIFAR10, which are also widely used in other related references, e.g., [17], [44], [62]. The small version of Reuters is denoted as Reuters_s. The experiments on the 4 large datasets, i.e., {Caltech101, Reuters, Animal10, CIFAR10}, are performed on an Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz with 256 GB RAM, while the experiments on the other datasets are performed on an Intel(R) Core(TM) i5-7300HQ CPU @ 2.50 GHz with 16 GB RAM. The experiments concerning the deep-learning algorithms are performed on an NVIDIA Tesla A100.
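The hardening step used for evaluation can be sketched with the frozenset mass representation from Section III: each object is assigned to the singleton cluster with maximal plausibility (the normalization in the plausibility-probability transformation (5) does not change this argmax), after which standard metrics such as NMI apply. A hedged sketch:

```python
from sklearn.metrics import normalized_mutual_info_score

def harden(credal, singletons):
    """Turn a credal partition (one mass function per object, mapping
    frozenset focal sets to masses) into a hard partition by picking the
    singleton cluster with the largest plausibility."""
    labels = []
    for m in credal:
        pl = {w: sum(v for B, v in m.items() if w in B) for w in singletons}
        labels.append(max(pl, key=pl.get))
    return labels

# y_pred = harden(credal, ["w1", "w2", "w3"])
# nmi = normalized_mutual_info_score(y_true, y_pred)   # as reported in the tables
```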

TABLE III BENCHMARK DATASETS. WE RUN SRMsim INSTEAD OF SRMVEC ON {COIL20, YALE, CALTECH101, CIFAR10, ANIMAL10}, WHICH HAVE A LARGER NUMBER OF CLUSTERS, AS DISCUSSED IN REMARK 4. ON CALTECH101, ONLY THE SINGLE CLUSTERS AND Ω ARE CONSIDERED

    A. Comparison Experiment

Comparing Algorithms: SRMVEC is compared with 14 SoTA MVC algorithms, including 2 subspace clustering algorithms (MSSC [63] and MLRSSC [17]), MvWECM [18] based on evidence theory, CGL [39] based on spectral clustering, MKCSS [41] based on kernel clustering, MCLES [40] based on non-negative matrix factorization, and 8 deep-learning algorithms (AE2Net [16], DeepNMF [20], MvDSCN [64], DEMVC [65], DMJCT [66], MFLVC [60], SAMVCH [67] and SDMVC [19]).

Experimental Set-Up: We adopt the code provided for these algorithms, and the hyperparameters are tuned as suggested in the original papers to generate the best results. We feed the correct number of clusters to the comparing algorithms.

In Table VII, the ACC and NMI values are reported. Compared with the 2 subspace clustering algorithms (MSSC and MLRSSC), SRMVEC has significantly better performance on most datasets. This is because SRMVEC can learn a more precise affinity matrix than they do, as discussed in the ablation study. The better performance achieved by SRMVEC compared to CGL, MKCSS and MCLES may be due to the guidance from the detected cluster centers in the partitioning process. Because MvWECM performs a weighted average calculation of the partition matrix of each view to obtain the final partition matrix, and the correlation between views is not reasonably considered, SRMVEC has better performance. One can also find that SRMVEC achieves statistically higher ACC/NMI values than the 8 deep-learning algorithms (AE2Net, DeepNMF, MvDSCN, DEMVC, DMJCT, MFLVC, SAMVCH and SDMVC) in 57.2% (110/192) of the cases, and SRMVEC is defeated by these deep-learning algorithms in 3.12% (6/192) of the cases. This suggests that the credal partition created by SRMVEC has better fault tolerance than the hard partitions, although the deep-learning algorithms may be able to obtain a more reasonable representation of multi-view data. In summary, SRMVEC is statistically superior to the other algorithms in 70.2% (118/168) and 68.5% (115/168) of the cases in terms of ACC and NMI, respectively.
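The win/tie/loss statistics above rest on the paired t-test at the 0.05 level noted in the caption of Table VII. A minimal sketch of that protocol, comparing the per-run scores of two algorithms on one dataset:

```python
from scipy.stats import ttest_rel

def win_tie_loss(scores_srmvec, scores_other, alpha=0.05):
    """Paired t-test over repeated runs: +1 if SRMVEC is statistically
    superior, -1 if inferior, 0 for a tie (difference not significant)."""
    t_stat, p_value = ttest_rel(scores_srmvec, scores_other)
    if p_value >= alpha:
        return 0
    return 1 if t_stat > 0 else -1
```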

Because the deep-learning algorithms are run on GPU, we only report the running times of SRMVEC and the remaining comparing algorithms in Table VIII. MvWECM and SRMVEC consider focal sets with the same form when deriving a credal partition. For a fair comparison, the time spent on detecting the cluster centers through human intervention is not counted for SRMVEC. Compared to {MSSC, MLRSSC, MCLES, MvWECM}, SRMVEC consumes more time on the Reuters and Yale datasets, whose views are high-dimensional, due to the complexity of computing the dimension similarity matrices. SRMVEC is more efficient on larger datasets (e.g., Animal10) that have fewer dimensions, as its complexity of O(n²) is lower than the O(n³) complexity of the other algorithms. Besides, SRMVEC requires slightly more time than the lower-complexity c-means-based MvWECM algorithm on the {Yale, Reuters, CIFAR} datasets.

Comparison Between SRMVEC and Its Simple Version SRMsim: As shown in the caption of Table III, we run SRMsim instead of SRMVEC on the {COIL20, Yale, Caltech101, CIFAR10, Animal10} datasets, which have {20, 15, 101, 10, 10} clusters, to avoid the heavy complexity with respect to O(2^u) (as discussed in Remark 4). In this section, we compare SRMsim and SRMVEC on the other 7 datasets. The comparison results are shown in Fig. 5.

One can see that SRMVEC performs better than SRMsim (in terms of ACC and NMI) on all the datasets except WebKB2 with 2 clusters, because SRMVEC degenerates to SRMsim when the dataset has only 2 clusters. In addition, the simplified SRMsim consumes less time than the SRMVEC algorithm. On the Digit dataset with 10 clusters, SRMsim costs nearly one-third of the time of SRMVEC. On the Reuters dataset, SRMVEC (1958.4 s) consumes more than 3 times the running time of SRMsim (563.2 s), but obtains only a 0.007 improvement in terms of ACC/NMI. This shows that even if we follow the strategy provided in Remark 4 to simplify the form of the credal partition, i.e., retaining only the clusters with a cardinality lower than 3 and the ignorance cluster, the clustering performance does not degrade significantly, but much runtime is saved. On datasets with many clusters in real-world scenarios, SRMsim allows users to obtain a fast clustering result.

TABLE IV NUMBER OF CLUSTERS ESTIMATED BY DIFFERENT SRMVEC VARIANTS. THE INCORRECT CLUSTER NUMBERS ARE SHOWN IN PARENTHESES. THE ACCURACY SHOWN IN THE LAST COLUMN IS THE PROPORTION OF DATASETS ON WHICH THE CORRECT CLUSTER NUMBERS ARE FOUND

TABLE V ACC AND NMI (MEAN ± STD. DEVIATION) VALUES OF SRMmco1, SRMSMP, SRMMFL AND SRMSDM, WHICH CALCULATE THE AFFINITY MATRIX BASED ON THE METHODS PROPOSED IN [19], [58]-[60]. THE BEST RESULTS ARE BOLD AND UNDERLINED

    B. Ablation Studies

Benefits of the Sparse Reconstructive Procedure (6) and Evidence Combination (8): We compare the performance of estimating the cluster numbers between SRMVEC and its 14 variants. In each variant, we finely tune α and β using the strategy provided in Remark 5. Concretely, we consider

● 2 variants SRMSSC and SRMLR, replacing the learning of the affinity matrix W* through solving problem (6) with the methods used in [63] (i.e., solving problem (1)) and [17], respectively;

● 4 variants where the affinity matrix is replaced with the similarity matrices calculated as in [58], [59] (denoted as SRMmco1 and SRMSMP, respectively), or calculated based on the features learned in [19] (SRMSDM) and [60] (SRMMFL);

● 2 variants SRMg1 and SRMg2, adopting the graph-based adjacency matrix according to the methods used in [68] and [69], respectively;

● The variant SRMnos, adopting the objective function in problem (6) but ignoring the term α‖W‖1;

● The variant SRMnod, ignoring the related information between intra-view dimensions in problem (6);

● The variant SRMa, ignoring the learning of the view weights in problem (6);

● 2 variants SRM+ and SRM×, using simple addition and multiplication, respectively, instead of the evidence combination (3) to fuse the information provided by xj ∈ P(xi);

TABLE VI ACC AND NMI (MEAN ± STD. DEVIATION) VALUES OF DIFFERENT SRMVEC VARIANTS. EACH OF THE VARIANTS IS RUN 20 TIMES. SRMSC AND SRMKKM DERIVE HARD PARTITIONS. SRMKFC CREATES A FUZZY PARTITION. SRMnoc INITIALIZES THE CREDAL PARTITION RANDOMLY INSTEAD OF USING THE WAY SHOWN IN (15). SRMnor IGNORES THE LEARNING OF THE COEFFICIENTS ρ1 AND ρ2 IN FUNCTION (12)

● The variant SRMdim, using the strategy to control the complexity discussed in Remark 4, i.e., choosing only half of the total dimensions in the sth view to perform the similarity calculation for each view.

The cluster numbers estimated by the different SRMVEC variants are shown in Table IV, from which one can see that SRMVEC estimates the true number of clusters on all 12 datasets. Comparing SRMVEC with SRMSSC and SRMLR, we can find that these two variants only achieve Accuracy = 66.7% and Accuracy = 75%, respectively, whereas SRMVEC detects the correct cluster numbers on all 12 datasets. This indicates that the affinity matrix learned from the sparse reconstructive procedure is more precise than those from the standard self-expression methods used in [17] and [63]. Compared to SRMg1 and SRMg2, which use graph-based adjacency matrices, SRMVEC is still the best because it considers the related information between dimensions within each view when learning the affinity matrix. In addition, the affinity matrix learned by SRMVEC is not obtained by element-wise averaging of the affinity matrices from all views, but through a joint learning process.

After ignoring the related information between dimensions in problem (6), SRMnod only finds the correct number of clusters on 5 datasets. Thus, considering the related information between intra-view dimensions in the learning of the affinity matrix can substantially improve the performance of estimating the cluster numbers. One can also find that the ℓ1 term in the objective function of problem (6) is essential, because SRMnos only estimates the true cluster number on 7 datasets. Focusing on SRMa, this variant also shows weaker performance than SRMVEC, indicating the critical role of view-weight learning.

Besides, we can see that SRMVEC outperforms both SRM+ and SRM×. This demonstrates that combining the information supplied by xj ∈ P(xi) using the evidence combination is preferable to using the additive/multiplicative methods. Such an observation supports the finding outlined in Remark 2. On 10 datasets, the SRMdim algorithm can identify the true number of clusters. This finding suggests that, in the majority of situations, reducing the computation of the dimension similarity matrices G(s), as demonstrated in Remark 4, does not significantly degrade the performance of estimating the cluster number. Focusing on the 4 variants {SRMmco1, SRMSMP, SRMMFL, SRMSDM}, only SRMSDM can find the correct number of clusters on all datasets. This suggests that calculating the affinity matrix based on the methods proposed in [58]-[60] is not suitable for probing cluster centers using the definitions Sep and Pos.

We next further explore whether the affinity matrix learned by the proposed sparse reconstruction method is more appropriate for creating the credal partition. The ACC and NMI values of the 4 variants are shown in Table V, where the matrix D calculated by (11) is replaced with the affinity matrices calculated in [19], [58]-[60]. Note that the same initialization strategy shown in Section V-B is used in these 4 variants. As can be seen, SRMVEC has better performance than these 4 variants except on the Yale and Reuters datasets. In particular, SRMVEC has higher ACC and NMI on all datasets than SRMSDM, which has the same performance in finding cluster centers. This suggests that the affinity matrix learned through solving problem (6) is more suitable for generating a credal partition.

Benefits of Identifying the Cluster Centers and Adopting the Credal Partition Computed by Minimizing Function (12): In this experiment, we compare the performance of grouping objects between SRMVEC and its 5 variants. Treating the distance matrix D = (dij) (calculated based on (11)) and the true cluster number as input, the variants include

● SRMKFC, which derives a fuzzy partition through kernel fuzzy c-means (KFCM);

● SRMSC and SRMKKM, which derive hard partitions through spectral clustering (SC) and kernel k-means (KKM), respectively;

TABLE VII ACC, NMI (MEAN ± STD. DEVIATION) AND RUNNING TIME (IN SECONDS) OF DIFFERENT ALGORITHMS. THE ●/○ INDICATES WHETHER SRMVEC IS STATISTICALLY SUPERIOR/INFERIOR TO A CERTAIN COMPARING ALGORITHM BASED ON THE PAIRED T-TEST AT A 0.05 SIGNIFICANCE LEVEL. THE STATISTICS OF WIN/TIE/LOSS ARE SHOWN IN THE LAST COLUMN OF THE FIRST 2 SUB-TABLES. TO SAVE SPACE, EACH DATASET IS REPRESENTED BY THE FIRST 3 LETTERS OF ITS NAME

● SRMnoc, which initializes the credal partition randomly instead of using the way shown in (15), meaning that the generation of the credal partition in SRMVEC is no longer guided by the information provided by the detected cluster centers;

● SRMnor, which ignores the learning of the coefficients ρ1 and ρ2 in (12). Thus, SRMnor cannot reduce the magnitude gap between the conflict value and dij through linear variation. In SRMnor, the detected cluster centers are the same as those in SRMVEC.

We report the average ACC and NMI values of these algorithms in Table VI, where the best results are bold and underlined. First, SRMVEC achieves the highest ACC and NMI values on all 12 datasets. More specifically, the performance differences between SRMVEC and the two hard-partitioning variants (SRMSC and SRMKKM) are more pronounced on Reuters_s and Reuters, which have high-dimensional views and more multi-view objects located in the overlapping areas. This result confirms that the credal partition improves the fault tolerance of SRMVEC, i.e., it describes the ambiguity and uncertainty in the cluster memberships of multi-view objects more appropriately. As discussed in Section III, the credal partition allows the objects to be contained in the composite clusters rather than only in the single clusters. The benefit of using the credal partition can also be demonstrated when comparing SRMVEC with the fuzzy-partitioning SRMKFC. One can find that SRMVEC always has the best performance when compared with SRMnoc. This demonstrates that the performance of the credal partition can be directly improved under the guidance of the information in the detected cluster centers. Furthermore, the comparison between SRMVEC and SRMnor shows that the learning of the coefficients ρ1 and ρ2 is also critical. This is because the magnitude difference between dil and the conflict value becomes large after losing the linear variation provided by ρ1 and ρ2, and then the gradient-based algorithm used to minimize function (12) often falls into a local minimum.

Fig. 3. An example of tuning α and β on the 3Source dataset. From the leftmost Sep-Pos charts to the rightmost Sep-Pos charts, the cluster centers become easier to distinguish from the other objects.

    C. Specific Behaviors of SRMVEC

An Example of Tuning α and β: As discussed in Remark 5, we give an example of tuning the hyperparameters α and β in Fig. 3. In the leftmost sub-figures, the objects in the upper-right corners of the Sep-Pos charts are not easily distinguishable from the other objects, so choosing the cluster centers appears challenging and unrealistic. We then fix β = 1×10^-3 and gradually increase α from 1×10^-5. Observing the Sep-Pos charts (shown in the 2nd and 3rd columns) generated by each group of {α, β}, some objects (the cluster centers) are gradually separated from the other objects. For a given hyperparameter configuration, fixing one of {α, β} and tuning the other always allows users to estimate the correct cluster numbers.

Empirical Convergence Analysis: We first show the convergence plots of solving problem (6) in Figs. 4(a.1)-4(a.4), where the value of |Lτ+1(W, a_s) − Lτ(W, a_s)| between the (τ+1)th and τth iterations is presented. As can be seen, the convergence of the two-step iterative algorithm proposed in Section V-A is illustrated by the gradual decrease in the value of |Lτ+1(W, a_s) − Lτ(W, a_s)| between two adjacent iterations. We also provide the convergence plots of minimizing SSRMVEC(MΩ, ρ1, ρ2) in Figs. 4(b.1)-4(b.4), where the value of SSRMVEC(MΩ, ρ1, ρ2) in every iteration is shown. It is apparent that the difference in SSRMVEC(MΩ, ρ1, ρ2) between 2 adjacent iterations gradually decreases to 0, illustrating the convergence of the gradient-based algorithm proposed in Section V-B. As the optimization procedure proceeds, the ACC of SRMVEC also generally increases. On the BBCSport, Reuters and COIL20 datasets, SSRMVEC(MΩ, ρ1, ρ2) always decreases slowly, then rapidly, then slowly again, indicating that the solution of minimizing SSRMVEC(MΩ, ρ1, ρ2) goes from a flat region to a sharp region, and then to a flat region.

Visualizing the Clustering Results: In Fig. 6, we visualize the clustering results on the 4 datasets using the t-SNE method [70], where the Euclidean distances between objects are replaced by the distances calculated according to (11). After visualization, the objects clearly form reasonable clusters in the 2-dimensional figures, and the selected cluster centers are also in reasonable locations. This illustrates that one cluster can indeed be represented by a representative cluster center selected according to the basic idea proposed in this paper.
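Feeding t-SNE a precomputed distance matrix instead of Euclidean distances is directly supported by common implementations. A sketch with scikit-learn (note that metric='precomputed' requires a random initialization):

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_with_custom_distances(D, seed=0):
    """2-D t-SNE embedding driven by a precomputed (n, n) distance matrix,
    standing in for the distances of Eq. (11)."""
    tsne = TSNE(n_components=2, metric="precomputed",
                init="random", random_state=seed)
    return tsne.fit_transform(np.asarray(D))
```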

VII. CONCLUSION

Fig. 4. The value of |Lτ+1(W, a_s) − Lτ(W, a_s)| in each iteration (a.1)-(a.4). As the optimization algorithm proceeds, the value of |Lτ+1(W, a_s) − Lτ(W, a_s)| decreases gradually. The SSRMVEC(MΩ, ρ1, ρ2) value and ACC of SRMVEC in each iteration (b.1)-(b.4). The SSRMVEC(MΩ, ρ1, ρ2) and ACC curves are colored red and green, respectively. As the gradient-based algorithm proceeds, the SSRMVEC(MΩ, ρ1, ρ2) value decreases gradually and the ACC increases.

Fig. 5. Comparison between SRMsim and SRMVEC in terms of ACC, NMI and running time. On the 7 datasets, SRMsim has slightly worse clustering performance than SRMVEC but consumes less running time. Each dataset is abbreviated by its first 3 letters.

This paper proposes a novel sparse reconstructive evidential clustering algorithm named SRMVEC. By using a 2-dimensional chart and human interaction, the number of clusters can be quickly determined. As far as we know, this may be the first attempt to estimate the number of clusters in the MVC problem. Moreover, SRMVEC improves the clustering by creating a credal partition that takes into account more of the ambiguity and uncertainty in the assignment of multi-view objects. The sparse reconstructive technique, evidence theory, and the discovered cluster centers are shown to be advantageous for SRMVEC. The strong clustering performance of SRMVEC is demonstrated by experiments on 12 benchmark datasets. When other MVC algorithms need to detect the number of clusters but have no prior knowledge of the data being studied, SRMVEC can also be used as a precursor step.

    Fig.6.Clustering results visualized by using t-SNE on 2 datasets.The found cluster centers are marked by red crosses and are in suitable locations.
