Citations

20

Article Abstract

Multiview attributed graph clustering is an important approach to partitioning multiview data based on the attribute characteristics and adjacency matrices from different views. Several attempts have been made using graph neural networks (GNNs), and they have achieved promising clustering performance. Despite this, few of them pay attention to the inherent view-specific information embedded in the multiple views. Meanwhile, they are incapable of recovering the latent high-level representation from the low-level ones, greatly limiting the downstream clustering performance. To fill these gaps, a novel dual information-enhanced multiview attributed graph clustering (DIAGC) method is proposed in this article. Specifically, the proposed method introduces a specific information reconstruction (SIR) module to disentangle the exploration of the consensus and specific information from multiple views, which enables the graph convolutional network (GCN) to capture more essential low-level representations. Besides, a contrastive learning (CL) module maximizes the agreement between the latent high-level representation and the low-level ones, and enables the high-level representation to satisfy the desired clustering structure with the help of a self-supervised clustering (SC) module. Extensive experiments on several real-world benchmarks demonstrate the effectiveness of the proposed DIAGC method compared with the state-of-the-art baselines.
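The agreement-maximization step the abstract describes — pulling each sample's high-level representation toward its own low-level representation while pushing it away from every other sample's — follows the standard contrastive (InfoNCE-style) pattern. The sketch below is an illustrative assumption of that pattern in numpy, not the paper's implementation; the array names and the temperature value are hypothetical:

```python
import numpy as np

def info_nce_loss(high, low, temperature=0.5):
    """Contrastive agreement loss between a high-level representation
    `high` (n x d) and a low-level one `low` (n x d): the i-th rows of
    the two matrices form the positive pair, all other rows act as
    negatives."""
    # L2-normalize so dot products become cosine similarities
    high = high / np.linalg.norm(high, axis=1, keepdims=True)
    low = low / np.linalg.norm(low, axis=1, keepdims=True)
    sim = high @ low.T / temperature              # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives sit on the diagonal (i-th high vs. i-th low)
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=(8, 16)))
random_ = info_nce_loss(z, rng.normal(size=(8, 16)))
print(aligned < random_)  # aligned views incur a lower loss
```

Minimizing such a loss drives the two representation levels into agreement, which is the mechanism the CL module relies on before the SC module imposes the clustering structure.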

Source

http://dx.doi.org/10.1109/TNNLS.2024.3401449

Publication Analysis

Top Keywords

multiview attributed: 12
attributed graph: 12
graph clustering: 12
high-level representation: 12
dual enhanced: 8
enhanced multiview: 8
clustering performance: 8
multiple views: 8
latent high-level: 8
representation low-level: 8

Similar Publications

Objective: To develop a multiview fusion framework that effectively identifies suspect keratoconus cases and facilitates the possibility of early clinical intervention.

Design: Retrospective cross-sectional study.

Subjects: A total of 573 corneal topography maps representing eyes classified as normal, suspect, or keratoconus.

Recent advances in multimodal and contrastive learning have significantly enhanced image and video retrieval capabilities. This fusion provides numerous opportunities for multi-dimensional and multi-view retrieval, especially in multi-camera surveillance scenarios in traffic environments. This paper introduces a novel Multi-modal Vehicle Retrieval (MVR) system designed to retrieve the trajectories of tracked vehicles using natural language descriptions.

Background: Accurately predicting synergistic drug combinations is critical for complex disease therapy. However, the vast search space of potential drug combinations poses significant challenges for identification through biological experiments alone. Nowadays, deep learning is widely applied in this field.

Multidrug combination therapy has long been a vital approach for treating complex diseases by leveraging synergistic effects between drugs. However, drug-drug interactions (DDIs) are not uniformly beneficial. Accurate and rapid identification of DDIs is critical to mitigate drug-related side effects.

Multi-view clustering (MVC) aims to exploit the latent relationships between heterogeneous samples in an unsupervised manner, which has served as a fundamental task in the unsupervised learning community and has drawn widespread attention. In this work, we propose a new deep multi-view contrastive clustering method via graph structure awareness (DMvCGSA) by conducting both instance-level and cluster-level contrastive learning to exploit the collaborative representations of multi-view samples. Unlike most existing deep multi-view clustering methods, which usually extract only the attribute features for multi-view representation, we first exploit the view-specific features while preserving the latent structural information between multi-view data via a GCN-embedded autoencoder, and further develop a similarity-guided instance-level contrastive learning scheme to make the view-specific features discriminative.
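Both DMvCGSA above and the DIAGC abstract rely on graph convolution to mix node attributes along graph structure. A minimal numpy sketch of one standard GCN propagation step (symmetric normalization with self-loops, then a linear map and ReLU); the toy graph and dimensions are illustrative assumptions, not taken from either paper:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution step:
    H' = relu(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    # symmetrically normalized adjacency
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ features @ weight, 0.0)  # ReLU activation

# toy 4-node path graph with 3-dim attributes, projected to 2 dims
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
h = gcn_layer(adj, rng.normal(size=(4, 3)), rng.normal(size=(3, 2)))
print(h.shape)  # (4, 2)
```

Stacking such layers inside an autoencoder, as the DMvCGSA snippet describes, lets the learned view-specific features carry both attribute and structural information.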
