Nowadays, real-world data often comes from multiple sources, but most existing multi-view K-Means methods perform poorly on linearly non-separable data and require initializing cluster centers and computing means, which makes their results unstable and sensitive to outliers. This paper proposes an efficient multi-view K-Means that addresses these issues. Specifically, our model avoids initializing and computing the cluster centroids of the data. Additionally, it uses the Butterworth filter function to transform the adjacency matrix into a distance matrix, which makes the model capable of handling linearly inseparable data and insensitive to outliers. To exploit the consistency and complementarity across multiple views, our model constructs a third-order tensor composed of the discrete index matrices of the different views and minimizes the tensor's rank via the tensor Schatten p-norm. Experiments on two artificial datasets verify the superiority of our model on linearly inseparable data, and experiments on several benchmark datasets illustrate its performance.
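The abstract's key preprocessing step is mapping adjacency (similarity) to distance with a Butterworth-style response. A minimal sketch of one plausible reading, where the function name and the `cutoff`/`order` parameters are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def butterworth_distance(A, cutoff=0.5, order=2):
    """Map an adjacency (similarity) matrix to a distance-like matrix.

    Hypothetical reading of the Butterworth filter step: the squared-magnitude
    response 1 / (1 + (a / cutoff)^(2 * order)) equals 1 at a = 0 and decays
    smoothly toward 0 as adjacency grows, so strongly connected pairs get small
    distances while the gentle roll-off damps the influence of outliers.
    """
    A = np.asarray(A, dtype=float)
    return 1.0 / (1.0 + (A / cutoff) ** (2 * order))
```

The higher the `order`, the sharper the transition between "close" and "far", mimicking the filter's steeper roll-off.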
DOI: http://dx.doi.org/10.1109/TIP.2023.3340609
PLoS One
June 2025
School of Computer and Software Engineering, Shenzhen Institute of Information Technology, Shenzhen, China.
Multiview clustering aims to improve clustering performance by exploring multiple representations of data and has become an important research direction. Meanwhile, graph-based methods have been extensively studied and have shown promising performance in multiview clustering tasks. However, most existing graph-based multiview clustering methods rely on assigning appropriate weights to each view based on its importance, with the clustering results depending on these weight assignments.
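The weight-sensitivity issue described above can be made concrete: most graph-based methods fuse per-view similarity graphs into one graph via a weighted sum, so the clustering result inherits whatever the weights are. A hedged sketch (function name and uniform-fallback choice are my own, not from the paper):

```python
import numpy as np

def fuse_view_graphs(graphs, weights=None):
    """Fuse per-view similarity graphs S_v into S = sum_v w_v * S_v.

    `weights` lies on the simplex (non-negative, sums to 1). Downstream
    clustering runs on the fused S, so the result depends directly on how
    these weights are assigned -- the sensitivity the abstract points out.
    """
    graphs = [np.asarray(S, dtype=float) for S in graphs]
    if weights is None:
        weights = np.full(len(graphs), 1.0 / len(graphs))  # uniform fallback
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and abs(weights.sum() - 1.0) < 1e-9
    return sum(w * S for w, S in zip(weights, graphs))
```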
IEEE Trans Pattern Anal Mach Intell
September 2025
Simple multiple kernel k-means (SMKKM) introduces a minimization-maximization learning paradigm for multi-view clustering and has achieved remarkable results in several applications. One of its variants, localized SMKKM (LSMKKM), was recently proposed to capture the variation among samples by focusing on reliable pairwise samples, which should be kept together, while cutting off unreliable, more distant pairs. Although effective, LSMKKM utilizes the variation of each sample indiscriminately, which limits its clustering performance.
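The "localization" idea above can be sketched as masking the combined kernel so that only mutually near (reliable) pairs contribute. This is only an illustration of the concept; the function names, the simple weighted kernel sum, and the symmetrized top-`tau` mask are my assumptions, not LSMKKM's actual formulation:

```python
import numpy as np

def knn_mask(K, tau):
    """Localization matrix: M[i, j] = 1 iff j is among i's tau most similar
    samples under kernel K; symmetrized so only mutually reliable pairs
    survive (unreliable, farther pairs are cut off)."""
    n = len(K)
    M = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(K[i])[::-1][:tau]  # tau largest similarities
        M[i, nbrs] = 1.0
    return np.minimum(M, M.T)

def localized_kernel(kernels, weights, tau):
    """Combine base kernels with simplex weights, then zero out non-local
    entries -- a sketch of focusing on reliable pairwise samples."""
    K = sum(w * Kp for w, Kp in zip(weights, kernels))
    return K * knn_mask(K, tau)
```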
Although numerous clustering algorithms have been developed, many existing methods still rely on the K-means technique to identify clusters of data points. However, the performance of K-means is highly dependent on an accurate estimation of the cluster centers, which is difficult to achieve optimally. Furthermore, it struggles to handle linearly non-separable data.
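Both weaknesses are easy to demonstrate: plain Lloyd's k-means starts from randomly chosen centers and assigns points by distance to a mean, so on two concentric rings (a classic linearly non-separable case) its clusters cut straight through both rings. A minimal self-contained sketch:

```python
import numpy as np

def lloyd_kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means: random initial centers drawn from the data,
    then alternate nearest-center assignment and mean update. The outcome
    depends on the initial centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):  # keep old center if cluster empties
                centers[j] = X[labels == j].mean(0)
    return labels

# Two concentric rings: the natural clusters are the rings themselves,
# but mean-based clusters can only form convex (Voronoi) regions.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
inner = np.c_[np.cos(t[:100]), np.sin(t[:100])]        # radius 1
outer = 3 * np.c_[np.cos(t[100:]), np.sin(t[100:])]    # radius 3
X = np.vstack([inner, outer])
labels = lloyd_kmeans(X, k=2)
```

Checking the result shows at least one k-means cluster mixes points from both rings, i.e. the ring structure is not recovered.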
The growing reach of the Internet of Things (IoT) unlocks massive volumes of Big Data in many fields. In general, these Big Data may be distributed in a non-independent and non-identically distributed (non-IID) fashion. In this paper, we contribute a way to enable multi-view k-means (MVKM) clustering to preserve the privacy of each database, by allowing MVKM to operate locally on each client's multi-view data.
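The privacy principle described above is commonly realized federated-style: each client assigns its own points to the current global centers and shares only per-cluster sums and counts, never raw rows. A hedged sketch of one round under that assumption (function names and the sum/count protocol are illustrative, not the paper's exact scheme):

```python
import numpy as np

def local_stats(X, centers):
    """Client-side step: assign local points to the current global centers
    and return only per-cluster sums and counts (no raw data leaves)."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    k, dim = centers.shape
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j] = X[mask].sum(0)
        counts[j] = mask.sum()
    return sums, counts

def federated_round(client_data, centers):
    """Server-side step: aggregate client statistics into new centers."""
    k, dim = centers.shape
    total_sums, total_counts = np.zeros((k, dim)), np.zeros(k)
    for X in client_data:
        s, c = local_stats(X, centers)
        total_sums += s
        total_counts += c
    centers = centers.copy()
    nonempty = total_counts > 0
    centers[nonempty] = total_sums[nonempty] / total_counts[nonempty, None]
    return centers
```

Repeating `federated_round` until the centers stop moving reproduces ordinary k-means on the pooled data while each client's rows stay local.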
View Article and Find Full Text PDFNeural Netw
April 2025
School of Mathematical Sciences, Harbin Engineering University, Harbin 150001, China.
Multi-view clustering has garnered significant attention due to its capacity to utilize information from multiple perspectives. The concept of anchor graph-based techniques was introduced to manage large-scale data better. However, current methods rely on K-means or uniform sampling to select anchors in the original space.
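The uniform-sampling baseline mentioned above, and the anchor graph built on top of the chosen anchors, can be sketched in a few lines. The function names and the Gaussian-kernel, row-normalized construction are illustrative assumptions, not the paper's method:

```python
import numpy as np

def uniform_anchors(X, m, seed=0):
    """Pick m anchors by uniform sampling from the data -- the simple
    baseline current methods rely on (alongside running K-means)."""
    rng = np.random.default_rng(seed)
    return X[rng.choice(len(X), size=m, replace=False)]

def anchor_graph(X, anchors, sigma=1.0):
    """n x m anchor graph: Gaussian similarity from each point to each
    anchor, row-normalized. Downstream methods cluster with this thin
    matrix instead of the full n x n affinity graph."""
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    Z = np.exp(-d2 / (2 * sigma ** 2))
    return Z / Z.sum(1, keepdims=True)
```

With m much smaller than n, storage and spectral steps drop from O(n^2) to O(nm), which is the scalability motivation for anchor-based methods.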