MM-GTUNets: Unified Multi-Modal Graph Deep Learning for Brain Disorders Prediction

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Graph deep learning (GDL) has demonstrated impressive performance in predicting population-based brain disorders (BDs) through the integration of both imaging and non-imaging data. However, the effectiveness of GDL-based methods depends heavily on the quality of the modeled multi-modal population graphs and tends to degrade as the graph scale increases. Moreover, these methods often limit interactions between imaging and non-imaging data to node-edge interactions within the graph, overlooking complex inter-modal correlations and producing suboptimal outcomes. To address these challenges, we propose MM-GTUNets, an end-to-end Graph Transformer-based multi-modal graph deep learning (MMGDL) framework designed for large-scale brain disorders prediction. To effectively utilize the rich multi-modal disease-related information, we introduce Modality Reward Representation Learning (MRRL), which dynamically constructs population graphs using an Affinity Metric Reward System (AMRS). We also employ a variational autoencoder to reconstruct latent representations of non-imaging features aligned with imaging features. Building on this, we introduce Adaptive Cross-Modal Graph Learning (ACMGL), which captures critical modality-specific and modality-shared features through a unified GTUNet encoder, combining the advantages of Graph UNet and Graph Transformer, together with a feature fusion module. We validated our method on two public multi-modal datasets, ABIDE and ADHD-200, demonstrating its superior performance in diagnosing BDs. Our code is available at https://github.com/NZWANG/MM-GTUNets.
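The abstract describes a two-step graph-construction idea: a variational autoencoder aligns non-imaging (tabular) features with imaging features, and an affinity metric then wires subjects into a population graph. The PyTorch sketch below illustrates only that general shape of pipeline; the class and function names (NonImagingVAE, affinity_graph), the fixed cosine-similarity rule, and the 0.5 threshold are illustrative assumptions, not the paper's actual MRRL or AMRS implementation (see the linked repository for that).

import torch
import torch.nn as nn
import torch.nn.functional as F

class NonImagingVAE(nn.Module):
    # Hypothetical VAE mapping non-imaging (tabular) features into a latent
    # space intended to align with imaging features (the MRRL alignment step).
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * latent_dim)  # produces mu and log-variance
        self.dec = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, self.dec(z), mu, logvar

def affinity_graph(feats, threshold=0.5):
    # Pairwise cosine affinity between subjects, keeping edges above a threshold.
    # The paper's AMRS rewards/learns its affinity metric; this fixed rule is a stand-in.
    sim = F.cosine_similarity(feats.unsqueeze(1), feats.unsqueeze(0), dim=-1)
    return (sim > threshold).float() * sim  # (N, N) weighted adjacency

# Usage: 100 subjects with 10 tabular fields each.
vae = NonImagingVAE(in_dim=10, latent_dim=64)
z, recon, mu, logvar = vae(torch.randn(100, 10))
adj = affinity_graph(z)  # population graph over the aligned latent features

In the full framework, a graph built this way would feed the GTUNet encoder and feature fusion module described above; those components are omitted here.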

Source: http://dx.doi.org/10.1109/TMI.2025.3556420

Publication Analysis

Top Keywords

graph deep: 12
deep learning: 12
brain disorders: 12
graph: 9
multi-modal graph: 8
disorders prediction: 8
imaging non-imaging: 8
non-imaging data: 8
population graphs: 8
multi-modal: 5
