
    Diverse Deep Matrix Factorization With Hypergraph Regularization for Multi-View Data Representation

IEEE/CAA Journal of Automatica Sinica, 2023, Issue 11

Haonan Huang, Guoxu Zhou, Naiyao Liang, Qibin Zhao, Senior Member, IEEE, and Shengli Xie

Abstract—Deep matrix factorization (DMF) has been demonstrated to be a powerful tool for capturing the complex hierarchical information of multi-view data. However, existing multi-view DMF methods mainly explore the consistency of multi-view data, while neglecting the diversity among different views as well as the high-order relationships of data, resulting in the loss of valuable complementary information. In this paper, we design a hypergraph regularized diverse deep matrix factorization (HDDMF) model for multi-view data representation (MDR), to jointly utilize multi-view diversity and a high-order manifold in a multi-layer factorization framework. A novel diversity enhancement term is designed to exploit the structural complementarity between different views of data. Hypergraph regularization is utilized to preserve the high-order geometric structure of the data in each view. An efficient iterative optimization algorithm is developed to solve the proposed model, with theoretical convergence analysis. Experimental results on five real-world data sets demonstrate that the proposed method significantly outperforms state-of-the-art multi-view learning approaches.

I. INTRODUCTION

REAL-WORLD data can usually be described from multiple views or collected from various sources. For example, the same image can be represented by its color, texture, and edges; the same news is reported by different institutions. These heterogeneous features described by different data views are called multi-view data [1]. Multi-view data learning has become one of the research hotspots in machine learning because it provides rich and insightful information about data to many real-world applications, such as bioinformatics [2], face recognition [3], and document mining [4].

In the last decade, numerous approaches for multi-view data representation (MDR) have been proposed. When dealing with multi-view data, compared with traditional single-view methods that simply concatenate multiple types of features into one big matrix, multi-view methods aim to systematically embed the rich information and multi-way interactions into the learning process [5], [6]. Classical data representation methods include self-representation [7], spectral clustering [8], non-negative matrix factorization (NMF), sparse coding [9], [10], and tensor factorization [11], [12]. For instance, Gao et al. [13] proposed a self-representation method with a common indicator matrix that guarantees the consistency of the clustering structure among different views. In [14], a tensor regularized self-representation [15], [16] is introduced to ensure low redundancy and explore the high-order correlations underlying multiple views. The work in [8] develops a co-regularized spectral clustering method to obtain a consistent clustering result. In [17], Xu et al. added tensor nuclear norm minimization [18] on the indicator matrices to control the consistency among different views. However, both self-representation methods and spectral clustering methods need to construct a symmetric affinity matrix, which makes it difficult for them to work on large-scale data sets.

Recently, NMF-based methods have attracted extensive attention in MDR. Because they use a low-dimensional parts-based representation matrix, they can further improve the accuracy and scalability of clustering tasks [19], [20]. Aiming to keep clustering solutions more comparable, Liu et al. [21] proposed MultiNMF, which constructs a consensus term to learn a common matrix across different views. In [22], a partially shared NMF method is presented to simultaneously consider the characteristics of multi-view data (consistency and complementarity). Yang et al. [23] designed a uniform-distribution multi-view NMF model to reduce distribution divergences between different views by jointly learning a latent consensus matrix. Although the above NMF-based methods often achieve promising clustering performance under certain conditions [24], they work in a one-layer formulation, which cannot capture the complex hierarchical information and implicit low-level hidden attributes contained in the original data.

Fig. 1. Illustration of the workflow of the proposed HDDMF.

Inspired by the advances of deep learning methods [25], Trigeorgis et al. [26] proposed a novel deep matrix factorization (DMF) model to learn hidden representations that admit a clustering interpretation according to the unknown attributes of the input data. Compared with traditional NMF-based methods, DMFs have stronger data representation ability [27], which is favored by researchers and has quickly been extended to various scenarios, including community detection [28], remote sensing [29], and so on. Following this, Zhao et al. [30] extended the one-view DMF to a multi-view version (MDMF) by directly fixing the common one-side factor among multiple views. A parameter-free MDMF is proposed in [31] to simplify the model structure and reduce the complexity. A partially shared deep matrix factorization model is proposed in [32] to respect the consensus information and view-specific features with partial label information. Moreover, the multi-layer decomposition technique has also been applied to improve the representation ability of other traditional shallow decomposition models. The method in [33], based on concept factorization, is developed to capture comprehensive multi-view information. A novel deep multi-view concept learning method is presented in [34] to model consistent and complementary information in a semi-supervised way. In [35], the authors designed a novel robust auto-weighted deep k-means multi-view model which directly produces the partition result. Recently, Huang et al. [36] presented a deep autoencoder-like NMF method to find a compact multi-view representation that considers complementary and consistent information simultaneously.

Motivation: Note that these DMF-based MDR methods only emphasize consensus among multiple views and ignore the diversity attribute, resulting in the loss of the mutually complementary information in each view, which degrades performance. We consider introducing diversity constraints to ensure that the representation of each view carries as much distinct information as possible, so as to discover the structural complementarity across different views. Some works have also pointed out the importance of diversity in multi-view learning [37], [38]. On the other hand, existing methods usually fail to preserve the local manifold structure or only consider pairwise connectivity (e.g., MDMF [30]). In real-world applications, the relationships between data points can be far more complex than simple pairwise ones. If such complex relationships are simply compressed into pairwise relationships, valuable information for learning tasks will inevitably be lost. Some researchers have also shown the advantages of high-order geometrical regularization (namely, hypergraph regularization) in data representation [39], [40].

In this article, to address the above concerns, we propose a hypergraph regularized diverse deep matrix factorization-based MDR method (HDDMF). As shown in Fig. 1, each view's data matrix $X^v$ (the superscript $v$ denotes the $v$-th view) is decomposed into $m$ basis matrices $Z_i^v$ (the subscript $i$ denotes the $i$-th layer), constituting the multi-layer factorization, and one representation matrix. The diversity enhancement constraints are imposed on the final low-dimensional representation. As shown in the red box in Fig. 1, if two samples are similar in the 1st view's subspace, HDDMF enforces them to be complementary in the $V$-th view's subspace. This ensures that diverse information among multiple views can be captured and more comprehensive learning can be achieved. By introducing the hypergraph embedding regularization, HDDMF preserves the high-order geometric structure embedded in the high-dimensional feature space to explicitly model view-specific features. The hypergraph regularization and the diversity constraint complement each other well: the hypergraph regularization term prevents the loss of the internal geometric manifold caused by excessive diversity constraints, and helps distinguish the representations learned from different views to achieve more comprehensive learning.

    The main contributions of HDDMF are summarized as follows.

1) Under the assumption of diverse information among multiple views of data, a diversity-enhanced deep matrix factorization-based multi-view representation learning model is established to explore the structural complementarity that exists inter- and intra-view.

    2) Hypergraph regularization is performed to preserve the intrinsic geometrical structure, which can capture a high-order relation of the view-specific data locality and strengthen the model’s representation ability.

3) We develop an efficient algorithm for optimizing HDDMF and demonstrate that it monotonically decreases the objective function and converges to a stationary point.

The rest of this paper is organized as follows. In Section II, we give a brief introduction to some preliminaries on NMF and DMF. In Section III, we formally describe the proposed HDDMF model. Section IV presents an efficient algorithm to solve the proposed problem, discusses the convergence proof, and analyzes the time complexity. In Section V, we report extensive experimental results on five real-world data sets. Finally, Section VI concludes this paper. Table I summarizes the general notation used in this article for the reader's convenience.

    TABLE I NOTATION USED IN THIS PAPER

II. PRELIMINARIES

Non-negative matrix factorization (NMF) [41], [42] is designed to analyze non-negative data. Mathematically, given a data matrix $X$, NMF aims to approximately decompose it into two non-negative matrices, i.e., a basis matrix $Z$ and a low-rank representation matrix $H$:

$$X \approx ZH, \quad Z \ge 0, \; H \ge 0.$$
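For concreteness, the following is a minimal NumPy sketch of NMF using the classical multiplicative update rules of Lee and Seung; these standard rules, the rank r, and the iteration count are illustrative assumptions, not the update rules derived for the models in this paper.

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-10):
    """Factorize a non-negative X (q x n) as X ~ Z @ H with Z, H >= 0,
    using the classical Lee-Seung multiplicative updates (illustrative;
    not this paper's update rules)."""
    q, n = X.shape
    rng = np.random.default_rng(0)
    Z, H = rng.random((q, r)), rng.random((r, n))
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity of H and Z.
        H *= (Z.T @ X) / (Z.T @ Z @ H + eps)
        Z *= (X @ H.T) / (Z @ H @ H.T + eps)
    return Z, H

# Toy usage: approximate a random non-negative matrix with rank 5.
X = np.random.default_rng(1).random((50, 30))
Z, H = nmf(X, r=5)
print(np.linalg.norm(X - Z @ H, 'fro'))
```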

Cui et al. [31] extended NMF to semi-NMF by releasing the non-negativity constraints of NMF on the input data so that the model can deal with mixed-sign data. Semi-NMF can be considered a soft version of k-means, where $Z$ denotes the cluster centroids and $H$ denotes the cluster indicators for each data point. On the other hand, real-world data sets always consist of complicated and multi-level features, whose complex structural and hierarchical information is difficult for a shallow representation to capture. For example, a face image also contains information about posture, expression, clothing, and other attributes, which are helpful in identifying the depicted person. In order to extract a more expressive representation, Trigeorgis et al. [26] extended semi-NMF to deep matrix factorization (DMF), decomposing the data matrix $X$ into multiple factors to learn a high-level representation:

$$X \approx Z_1 Z_2 \cdots Z_m H_m \quad (2)$$

where $m$ is the number of layers, the basis matrices satisfy $Z_1 \in \mathbb{R}^{q \times p_1}, \ldots, Z_m \in \mathbb{R}^{p_{m-1} \times p_m}$, and the representation matrix satisfies $H_m \in \mathbb{R}_{+}^{p_m \times n}$. In fact, the approximation in (2) corresponds to successive factorizations of $X$:

$$X \approx Z_1 H_1, \quad H_1 \approx Z_2 H_2, \quad \ldots, \quad H_{m-1} \approx Z_m H_m.$$

Thus, based on the Frobenius norm, the loss function of DMF can be written as

$$\min_{Z_i, H_m} \left\| X - Z_1 Z_2 \cdots Z_m H_m \right\|_F^2$$

where $\|\cdot\|_F$ is the Frobenius norm. DMF can make up for the deficiency of the shallow NMF method because its multi-layer decomposition can capture the hierarchical structure of data, improving the performance of low-dimensional data representation and clustering.
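To illustrate the layer-wise scheme, here is a minimal NumPy sketch that greedily pre-trains a deep factorization by successively factorizing each layer's representation. The inner step uses the semi-NMF multiplicative updates of Ding et al. as a stand-in, since the paper's exact rules are not reproduced here, and the layer sizes are illustrative.

```python
import numpy as np

def semi_nmf(X, r, n_iter=100, eps=1e-10):
    """Semi-NMF X ~ Z @ H with H >= 0 and mixed-sign Z, using the
    multiplicative updates of Ding et al. (an illustrative sketch)."""
    rng = np.random.default_rng(0)
    H = rng.random((r, X.shape[1])) + eps
    pos = lambda A: (np.abs(A) + A) / 2  # element-wise positive part
    neg = lambda A: (np.abs(A) - A) / 2  # element-wise negative part
    for _ in range(n_iter):
        Z = X @ H.T @ np.linalg.pinv(H @ H.T)  # least-squares Z step
        XtZ, ZtZ = X.T @ Z, Z.T @ Z
        num = pos(XtZ).T + neg(ZtZ) @ H
        den = neg(XtZ).T + pos(ZtZ) @ H + eps
        H *= np.sqrt(num / den)               # keeps H non-negative
    return Z, H

def pretrain_dmf(X, layer_sizes):
    """Greedy layer-wise pre-training: X ~ Z1 Z2 ... Zm Hm."""
    Zs, H = [], X
    for p in layer_sizes:
        Z, H = semi_nmf(H, p)
        Zs.append(Z)
    return Zs, H

X = np.random.default_rng(1).standard_normal((100, 60))
Zs, Hm = pretrain_dmf(X, [50, 20])
print(np.linalg.norm(X - Zs[0] @ Zs[1] @ Hm, 'fro'))
```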

In order to tackle the challenge of multi-view data, a general multi-view version of deep matrix factorization can be straightforwardly designed. Let us denote the input data matrices of $V$ views by $X = \{X^1, X^2, \ldots, X^V\}$; the objective function can then be expressed as

$$\min \sum_{v=1}^{V} \left\| X^v - Z_1^v Z_2^v \cdots Z_m^v H_m^v \right\|_F^2. \quad (5)$$

Because the method above only considers the specific attributes of each view and cannot measure the diversity attribute of multi-view data, we call it non-diverse deep matrix factorization (NdDMF).

III. HYPERGRAPH REGULARIZED DIVERSITY-ENHANCED DEEP MATRIX FACTORIZATION

In this section, we seek a new deep matrix factorization method that can respect the high-order intrinsic geometric structure and simultaneously utilize multi-view diversity information to create an intact final representation matrix. We first detail the two main components: 1) a hypergraph function to discover high-order relationships among data; 2) a diversity enhancement term to strengthen the multi-view representation ability. The final objective function and its algorithmic solution are then presented. The proof of the convergence of the algorithm and the analysis of time complexity are included in the last subsections.

    A. Hypergraph Regularization

We construct a hypergraph $G = (V, E, W)$ to encode high-order relationships in the data space. $V$ denotes a finite set of vertices, $E$ is a family of hyperedges $e$ of $V$ with $\bigcup_{e \in E} e = V$, and $W$ is made up of $w(e)$, which is defined as the weighting function measuring the weight of a hyperedge [43]. The incidence matrix $R$, of size $|V| \times |E|$, defines the relationship between the vertices and the hyperedges; its entry $r(v_i, e_j)$ is 1 if $v_i \in e_j$ and 0 otherwise. Therefore, the degree of each vertex $d(v_i)$ and the degree of each hyperedge $d(e_j)$ can be calculated as

$$d(v_i) = \sum_{e \in E} w(e)\, r(v_i, e), \qquad d(e_j) = \sum_{v \in V} r(v, e_j).$$

Similar to [44], the unnormalized hypergraph Laplacian matrix can be defined as follows:

$$L_h = D_V - R W D_E^{-1} R^T$$

where $D_V$ and $D_E$ are diagonal matrices whose diagonal entries are the $d(v_i)$ and the $d(e_j)$, respectively. Thus, the hypergraph regularization term can be formulated as

$$\mathrm{Tr}\left( H L_h H^T \right)$$

where $H$ denotes the representation matrix and $H_i$ denotes the $i$-th data representation vector. A hypergraph is a generalization of a graph in which a hyperedge can connect any number of vertices, whereas an edge of an ordinary graph can only connect a pair of vertices. Therefore, constructing hypergraphs rather than ordinary graphs respects the high-order relationships among samples.
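To make this construction concrete, the sketch below builds a k-NN hypergraph (one hyperedge per sample, containing the sample and its k nearest neighbors, with unit weights) and forms the unnormalized Laplacian defined above; the neighborhood rule and the unit weighting are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def knn_hypergraph_laplacian(X, k=5):
    """Build a k-NN hypergraph on the columns of X (features x samples):
    one hyperedge per vertex containing itself and its k nearest
    neighbors, unit weights; return L_h = D_V - R W D_E^{-1} R^T."""
    n = X.shape[1]
    sq = (X ** 2).sum(axis=0)
    d2 = sq[:, None] + sq[None, :] - 2 * X.T @ X  # pairwise distances^2
    R = np.zeros((n, n))                          # incidence matrix
    for j in range(n):
        nbrs = np.argsort(d2[:, j])[:k + 1]       # sample j + k neighbors
        R[nbrs, j] = 1.0
    w = np.ones(n)                                # unit hyperedge weights
    Dv = np.diag(R @ w)                           # vertex degrees d(v_i)
    De = np.diag(R.sum(axis=0))                   # hyperedge degrees d(e_j)
    return Dv - R @ np.diag(w) @ np.linalg.inv(De) @ R.T

X = np.random.default_rng(0).standard_normal((20, 100))
L = knn_hypergraph_laplacian(X, k=5)
print(np.allclose(L.sum(axis=1), 0))  # rows of L_h sum to zero
```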

    B. Diversity Measurement

To guarantee diversity between two views, the main idea is to control the orthogonality of the data representations in the two views. As illustrated in Fig. 2(a), let us denote the indicator matrix of the $v$-th view by $Q^v$. To quantify the diversity between two views ($v$ and $w$), we can minimize the following function [45], [46]:

For multi-view representation learning, directly constraining the orthogonality of the same sample's representation vectors from different views has weak interpretability. Because different views represent different heterogeneous features, it is difficult to achieve a one-to-one correspondence between the positions of the representation column vectors. In addition, the relationships in the interior of the latent features cannot be measured by the above DI term. To address these concerns, we first define $Q^v = H^{vT} H^v$, whose $(j, i)$-th entry is the inner product of the $j$-th and $i$-th columns of the new representation matrix $H^v$. Along this line, as shown in Fig. 2(c), we design the diversity enhancement term $DE(\cdot)$ as follows:

    Based on the property of the trace operation, we can reformulate (12) as a simple quadratic term:

The DE term ensures the orthogonality of the inner products of the representation matrices from different views. Each column vector of the $Q$ matrix represents the similarity between one sample and the other samples, which corresponds to the same positions across different views and therefore has strong interpretability. In particular, if two learned feature points $h_i$ and $h_j$ are very similar in the $v$-th view (i.e., $Q_{ij}^v \approx 1$), we expect that they would learn complementary features in the $w$-th view (i.e., $Q_{ij}^w \approx 0$). In conclusion, the DE term essentially mines the diversity between sample pairs from different views, and can explore the structural complementarity that exists inter- and intra-view.
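As a concrete reading of this construction, the following sketch scores the diversity between two views as $\mathrm{Tr}(Q^v Q^w)$ with $Q = H^T H$; this trace form is our assumption for the quadratic term in (13), whose displayed equation is not reproduced in this excerpt.

```python
import numpy as np

def de_term(Hv, Hw):
    """Assumed diversity enhancement score DE = Tr(Q^v Q^w) with
    Q = H^T H, for two views' representations (features x samples);
    smaller values indicate more diverse (complementary) views."""
    Qv = Hv.T @ Hv  # sample-sample similarities in view v
    Qw = Hw.T @ Hw  # sample-sample similarities in view w
    return np.trace(Qv @ Qw)

rng = np.random.default_rng(0)
Hv = rng.standard_normal((10, 50))
Hw = rng.standard_normal((10, 50))
# Tr(Qv Qw) equals ||Hv Hw^T||_F^2, i.e., a simple quadratic term.
assert np.isclose(de_term(Hv, Hw),
                  np.linalg.norm(Hv @ Hw.T, 'fro') ** 2)
print(de_term(Hv, Hw))
```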

Finally, combining (5), (9), and (13), we can formulate the objective function $O$ of our hypergraph regularized diversity-enhanced deep matrix factorization as follows:

Fig. 2. Illustration of the proposed diversity measure methods.

IV. OPTIMIZATION ALGORITHM

    A. The HDDMF Algorithm

Algorithm 1 HDDMF Algorithm
Input: Multi-view data $\{X^v\}_{v=1}^V$, the number of layers $m$, layer sizes $\{p_i\}_{i=1}^m$, hyperparameters $\beta$, $\mu$
Output: The final representation matrix $H^*$, $\{H_m^v\}_{v=1}^V$
1: for $v = 1$ to $V$ do
2:   for $i = 1$ to $m$ do
3:     $(Z_i^v, H_i^v) \leftarrow$ Semi-NMF$(H_{i-1}^v, p_i)$
4:   end for
5: end for
6: while not converged do
7:   for $v = 1$ to $V$ do
8:     Compute the hypergraph Laplacian matrix $L_h^v$ from $H_m^v$ by using (9)
9:     for $i = 1$ to $m$ do
10:      Update $Z_i^v$ via (17)
11:    end for
12:    Update $H_m^v$ via (21)
13:  end for
14: end while
15: Calculate the average value of all data representations of each view by (6)
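For orientation, here is a structural Python skeleton of Algorithm 1, reusing the semi_nmf and knn_hypergraph_laplacian helpers sketched earlier; the fine-tuning rules (17) and (21) are not reproduced in this excerpt, so they appear as clearly labeled identity placeholders.

```python
# Placeholder stand-ins for the paper's closed-form updates (17) and (21),
# which are not reproduced here; identity maps keep the skeleton runnable.
def update_Z(Z, *args):
    return Z

def update_H(H, *args):
    return H

def hddmf_skeleton(Xs, layer_sizes, beta, mu, k=5, n_outer=50):
    """Structural skeleton of Algorithm 1 for V views Xs (features x n)."""
    V, m = len(Xs), len(layer_sizes)
    Zs, Hs = [], []
    for Xv in Xs:                              # lines 1-5: pre-training
        Zv, H = [], Xv
        for p in layer_sizes:
            Z, H = semi_nmf(H, p)
            Zv.append(Z)
        Zs.append(Zv)
        Hs.append(H)
    for _ in range(n_outer):                   # lines 6-14: fine-tuning
        for v in range(V):
            Lh = knn_hypergraph_laplacian(Hs[v], k)          # line 8
            for i in range(m):
                Zs[v][i] = update_Z(Zs[v][i], Xs[v], Hs[v])  # via (17)
            Hs[v] = update_H(Hs[v], Xs[v], Zs[v], Lh, beta, mu)  # via (21)
    return sum(Hs) / V                         # line 15: average H*
```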

    B. Convergence of the Algorithm

In this section, we prove the convergence of the update rules (17) and (21).

Recall that $Z(G, G')$ is an auxiliary function of $O(G)$ if $Z(G, G') \ge O(G)$ and $Z(G, G) = O(G)$. If $Z$ is such an auxiliary function, then $O$ is nonincreasing under the update $G' = \arg\min_{G'} Z(G', G)$, since $O(G) = Z(G, G) \ge Z(G', G) \ge O(G')$. Therefore, as in [48], we construct an appropriate auxiliary function $Z$ that satisfies these requirements.

    TABLE II STATISTICS OF DATA SETS USED IN EXPERIMENTS

As proven in the Appendix, $Z$ is such a function for $O$ and satisfies the necessary conditions. In addition, $Z(G, G')$ is a convex function of $G$ and its global minimum is

    C. Time Complexity Analysis

V. EXPERIMENTAL RESULTS AND ANALYSIS

    A. Experimental Setup

The data sets and evaluation measures used are described below:

1) Data Sets: The Prokaryotic data set consists of 551 prokaryotic samples with three views: textual features and two types of genomic representations.

Caltech101-7 (https://www.vision.caltech.edu/datasets/) [50] is a subset of Caltech101, which consists of 1474 images from 7 widely used categories, i.e., Windsor-Chair, Motorbikes, Dollar-Bill, Snoopy, Garfield, Stop-Sign, and Face. Following [51], we extracted 6 features, i.e., Gabor, wavelet moments, CENTRIST, HOG, GIST, and local binary pattern (LBP) features, whose dimensions are 48, 40, 254, 1984, 512, and 928, respectively.

ORL (https://cam-orl.co.uk/facedatabase.html) consists of 400 facial images from 40 different individuals. In order to construct the multi-view data set, similar to [40], we extract three types of features, including intensity of dimension 4096, LBP of dimension 3304, and Gabor of dimension 6750.

Extended YaleB (http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html) [52] contains 650 images of 10 human subjects under 65 different pose/illumination conditions. Three types of features (intensity, LBP, and Gabor) are extracted from this data set.

STL10 (https://www-cs-faculty.stanford.edu/~acoates/stl10/) [53] is an image data set comprising 10 categories, i.e., bird, airplane, cat, truck, car, dog, monkey, ship, deer, and horse. We sample 1300 images from each category and build intensity, HOG, and LBP features as three views.

    We summarize the important statistical details of the data sets in Table II.

2) Evaluation Measures: To evaluate our method's performance, the following metrics are adopted: accuracy (ACC), normalized mutual information (NMI), purity, adjusted Rand index (AR), F-score, precision, and recall. Since each metric penalizes different properties of a clustering, we report all of them for a comprehensive evaluation.
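For reference, the sketch below shows a standard way to compute two of these metrics: clustering ACC via Hungarian matching of predicted cluster labels to ground-truth classes, and NMI via scikit-learn; this is a common recipe, not code from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one mapping of cluster labels to classes,
    found with the Hungarian algorithm on the contingency table."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                        # co-occurrence counts
    rows, cols = linear_sum_assignment(-cost)  # maximize matched pairs
    return cost[rows, cols].sum() / len(y_true)

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]  # permuted labels, perfect clustering
print(clustering_accuracy(y_true, y_pred))           # 1.0
print(normalized_mutual_info_score(y_true, y_pred))  # 1.0
```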

    B. Compared Methods

    Next, we compare our proposed method with the following state-of-the-art clustering methods:

1) Multi-View NMF (MultiNMF) [21]: This is a classical multi-view NMF method, which constructs a joint consensus matrix learning process and obtains meaningful and comparable clustering results. We set the only parameter λ to 0.01 according to the recommendation of the original paper.

2) Locality Preserved Diverse NMF (LP-DiNMF) [38]: This is a shallow NMF method, which maintains the local geometric structure and the diversity across multiple views simultaneously. We search for the parameters over {0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000} as suggested.

3) NMF With Co-Orthogonal Constraints (NMFCC) [54]: Based on LP-DiNMF, NMFCC additionally imposes orthogonal constraints on the learned basis matrices and representation matrices. We empirically set the parameters {α, β, μ, γ} to {0.01, 0.1, 0.1, 0.1} for all data sets, as the authors advised.

4) 2CMV [55]: A recently proposed factorization-based multi-view model using CMF and NMF, which can exploit the consensus and complementary information of multi-view data. We set the parameters {λ, δ} to {70, 1} as the paper recommended.

5) Multi-View Deep Matrix Factorization (MDMF) [30]: This approach extends deep semi-NMF from single-view to multi-view by fixing the shared one-sided final representation among multiple views. The two hyper-parameters γ and β are set to 0.5 and 0.1, as the authors recommended.

6) Self-Weighted Multi-View DMF (SMDMF) [31]: SMDMF is a hyper-parameter-free version of MDMF, which can obtain an appropriate weight for each view automatically.

7) Partially Shared DMF (PSDMF) [32]: A recently proposed partially shared structure which can discover view-specific and common features among different views. The parameter μ is set to 0.1 and β is set to 0 in order to carry out the experiments in the unsupervised setting.

8) Non-Diverse DMF (NdDMF): NdDMF is implemented by applying deep semi-NMF [26] (as shown in (5)) to each view, and then clustering the combination of the final representations through spectral clustering. We also implement a hypergraph-regularized version (called HNdDMF) to investigate the effectiveness of manifold regularization.

9) Our Methods: We carry out two versions of hypergraph regularized diversity-induced DMF. The first version combines the common diversity constraint described in (11) (called HDDMF-DI). The second version is HDDMF, which contains the diversity enhancement technique described in (13). We then apply spectral clustering to the learned final representation to obtain the clustering results.

It should be noted that LP-DiNMF, NMFCC, MDMF, HNdDMF, HDDMF-DI, and HDDMF all need to construct the graph Laplacian matrix using k-nearest neighbors (k-NN), where the parameter k is set to the number of data categories, as suggested in [38]. Our source code is available at https://github.com/libertyhhn/DiverseDMF.

    C. Parametric Sensitivity

1) Influence of the Number of Layers: To investigate the influence of model depth on clustering results, we apply the HDDMF method with depth varying from 1 layer to 4 layers, with layer sizes set to [50], [50 100], [50 100 150], and [50 100 150 200]. The clustering results with different numbers of layers are shown in Fig. 3. Although different data sets perform differently, in terms of both ACC and NMI we can observe that the clustering performance of the multi-layer model is better than that of the single-layer model on all five data sets. This verifies that the multi-layer model can explore the implicit hierarchical information, which is beneficial for clustering. As the number of layers increases, the clustering performance of the model may decline (e.g., on Prokaryotic), because the model enters an over-fitting state. Therefore, we adopt a suitable layer structure for each data set in the subsequent experiments. Specifically, we configure a 2-layer structure for the Prokaryotic, Extended YaleB, and STL10 data sets, a 3-layer structure for ORL, and a 4-layer structure for Caltech101-7.

Fig. 3. The clustering results (ACC and NMI) with different numbers of layers on the five data sets.

2) Influence of the Manifold Regularization and Diversity Constraint: To analyze the influence of the modules in (14), we focus on two important parameters, β and μ. β controls the contribution of the hypergraph regularization to the learned final representation matrices, and μ measures the degree of diversity in the representations among different views. Following a grid search strategy, both are chosen from the range [0.0001, 0.001, 0.01, 0.1, 1]. In Fig. 4, we show the parameter tuning results, in terms of ACC and NMI, on the Prokaryotic, Caltech101-7, ORL, Extended YaleB, and STL10 data sets, respectively. From the figure, we can observe that when β is set to a relatively large value and μ is set to a relatively small value, HDDMF achieves the best results in most cases.

    D. Performance Comparison

In order to make a fair comparison among all competitors, we directly use the source code provided by the authors for experimental verification, and search for the best parameters according to the suggestions of the original papers. All programs run within MATLAB (R2018b) on a server with an Intel(R) Xeon(R) E5-2640 @2.40 GHz CPU and 128 GB RAM, under the Linux operating system. For each method, we run it with ten initializations and record the mean and standard deviation of the results, following the experimental setup in [30]. The clustering performance comparisons on the five multi-view data sets are reported in Tables III-VII. For all these metrics, a higher value denotes better clustering performance, and the highest values are in boldface. Note that MultiNMF, LP-DiNMF, NMFCC, and 2CMV can only process non-negative data. Thus, they cannot handle data sets with negative pixels (e.g., Prokaryotic and Caltech101-7), and the results of these non-negative methods on Prokaryotic and Caltech101-7 are not available. From these tables, we can make the following observations:

Fig. 4. ACC and NMI changes with the alterations of μ and β on the data sets.

1) In general, the proposed HDDMF achieves the best results on all data sets, except for the NMI and AR metrics on the Prokaryotic data set. Taking the STL10 data set as an example, our method improves by around 1.6%, 2.3%, 2.43%, 1.78%, 1.33%, 1.35%, and 2.73% on the seven metrics over the second-best method, MDMF. This is mostly because our proposed method unifies three aspects in one model: a) structural complementarity of the representations from different views; b) high-order relationships among samples; c) deep representation to discover hierarchical information.

2) The deep matrix factorization-based MDR methods (MDMF, SMDMF, PSDMF, and the proposed HDDMF(-DI)) show better results than the single-layer matrix factorization-based MDR methods (MultiNMF, LP-DiNMF, NMFCC, and 2CMV) in most cases. The reason may be that, through deep factorization, the model can eliminate some adverse factors and keep identity information in the final representation.

3) It can be seen that the clustering performance of the models with diversity constraints is much better than that of the models without them. Taking the ORL data set as an example, compared with the non-diverse method HNdDMF, the methods with diversity constraints (HDDMF-DI and HDDMF) improve performance by more than 4% in terms of all metrics. This indicates that diversity-induced techniques can discover mutually complementary information among multiple views and are more conducive to clustering.

4) It is clear that HDDMF outperforms HDDMF-DI on all data sets. Taking the Caltech101-7 data set as an example, in terms of ACC and NMI the leading margins of HDDMF are about 3% and 7% over HDDMF-DI, respectively. This indicates that exploiting diverse attributes between samples and views at the same time can further improve the representation ability of the model and achieve more accurate learning.

5) On the Extended YaleB data set, HNdDMF achieves better performance than NdDMF. The reason is that the high-order manifold regularization can preserve the local geometric structure of the original data in the subspace learned by the model.

    E. The Convergence of the Algorithm

The HDDMF objective function is solved by the proposed iterative optimization method (Algorithm 1). We have theoretically proven its convergence property in Section IV-B. To show the convergence of HDDMF experimentally, we record the objective value (14) at each iteration. The convergence curves on the ORL, Caltech101-7, Extended YaleB, and STL10 data sets are shown in Fig. 5. We can observe that the objective function value drops sharply and then gradually reaches convergence after about 500 iterations.

    F. Visualizations for Embedding Results

Visualizations of the embedding results on the Caltech101-7, Extended YaleB, and ORL data sets are shown in Figs. 6-8, respectively. Here, the learned embedding representation matrices are projected onto a two-dimensional subspace using t-SNE [56]. Note that on Caltech101-7 we only compare the DMF-based methods because this data set contains negative pixels. For the original data space, we directly concatenate the features from different views. It can be observed that our proposed HDDMF yields a clearer cluster structure compared to the original data space, the two NMF-based methods, and the other two DMF-based methods.
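For reference, a minimal scikit-learn sketch of this visualization step is given below; the representation matrix H (features x samples), the labels, and the perplexity value are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Illustrative stand-ins for a learned representation (features x samples)
# and cluster labels; t-SNE expects samples as rows, hence the transpose.
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 400))
labels = rng.integers(0, 10, size=400)

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(H.T)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=5, cmap='tab10')
plt.title('t-SNE of the learned representation')
plt.show()
```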

VI. CONCLUSION

In this article, we propose a novel deep multi-view data representation model, HDDMF, which minimizes an orthogonality quantification term to capture the diversity of the underlying representation structure and integrates hypergraph regularization to preserve the local manifold structure embedded in the high-dimensional latent space. To solve the optimization problem of our method, a new algorithm is designed and a theoretical analysis of the convergence property is provided, showing that the proposed algorithm converges to a stationary point of the objective function. Extensive experimental results on five real-world databases show that the proposed methods outperform state-of-the-art multi-view learning approaches on the clustering task. Considering that nonlinear relationships are usually hidden in multi-view data [57], in future work we will extend HDDMF to a nonlinear version to discover nonlinear information and improve the learning ability of the model.

    TABLE III RESULTS ON THE PROKARYOTIC DATA SET (MEAN ± STANDARD DEVIATION)

    TABLE IV RESULTS ON THE CALTECH101-7 DATA SET (MEAN ± STANDARD DEVIATION)

    TABLE V RESULTS ON THE ORL DATA SET (MEAN ± STANDARD DEVIATION)

    APPENDIX

To prove that $Z(G, G')$ is an auxiliary function of $O(G)$, we first introduce Lemma 1, as in [58]:

Lemma 1: For any non-negative matrices $Q \in \mathbb{R}^{n \times n}$, $P \in \mathbb{R}^{k \times k}$, $S \in \mathbb{R}^{n \times k}$, $S' \in \mathbb{R}^{n \times k}$, where $Q$ and $P$ are symmetric, the following inequality holds:

$$\sum_{i=1}^{n} \sum_{j=1}^{k} \frac{(Q S' P)_{ij}\, S_{ij}^2}{S'_{ij}} \ge \mathrm{Tr}(S^T Q S P).$$

From $O(G)$ (as in (28)) and $Z(G, G')$ (as in (30)), we can obtain the following inequality by utilizing the above Lemma 1:

    To obtain lower bounds for the three remaining terms, we

    TABLE VI RESULTS ON THE EXTENDED YALEB DATA SET (MEAN ± STANDARD DEVIATION)

    TABLE VII RESULTS ON THE STL10 DATA SET (MEAN ± STANDARD DEVIATION)

Fig. 5. Iteration number versus the objective value of HDDMF.

utilize the inequality $z \ge 1 + \log z$, $\forall z > 0$, and get

Thus, from (33) to (38), we have $Z(G, G') \ge O(G)$ and $Z(G, G) = O(G)$.

Next, we take the first derivative of $Z(G, G')$ (as in (30)) with respect to $G$ to find the minimum of $Z(G, G')$ and get

Taking the second derivative of $Z(G, G')$ with respect to $G$, we have

Fig. 6. Visualization of the embedding results on the Caltech101-7 data set.

Fig. 7. Visualization of the embedding results on the Extended YaleB data set.

Fig. 8. Visualization of the embedding results on the ORL data set.
