Graph deep learning (GDL) has demonstrated impressive performance in predicting population-based brain disorders (BDs) through the integration of both imaging and non-imaging data. However, the effectiveness of GDL-based methods depends heavily on the quality of the multi-modal population graph and tends to degrade as the graph scale increases. Moreover, these methods often limit interactions between imaging and non-imaging data to node-edge interactions within the graph, overlooking complex inter-modal correlations and producing suboptimal outcomes. To address these challenges, we propose MM-GTUNets, an end-to-end Graph Transformer-based multi-modal graph deep learning (MMGDL) framework designed for large-scale brain disorder prediction. To effectively exploit rich multi-modal disease-related information, we introduce Modality Reward Representation Learning (MRRL), which dynamically constructs population graphs using an Affinity Metric Reward System (AMRS). We also employ a variational autoencoder to reconstruct latent representations of non-imaging features aligned with imaging features. Building on this, we introduce Adaptive Cross-Modal Graph Learning (ACMGL), which captures critical modality-specific and modality-shared features through a unified GTUNet encoder, combining the strengths of Graph UNet and Graph Transformer, along with a feature fusion module. We validated our method on two public multi-modal datasets, ABIDE and ADHD-200, demonstrating its superior performance in diagnosing BDs. Our code is available at https://github.com/NZWANG/MM-GTUNets.
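To make two of the abstract's ideas concrete, below is a minimal sketch, not the authors' implementation (see the linked repository for that): (1) a variational autoencoder that maps non-imaging features (e.g. age, sex, acquisition site) into a latent space sized to match the imaging features, and (2) a simple pairwise affinity score used to build a population graph. All names, layer sizes, and the product-of-cosine-similarities affinity are illustrative assumptions standing in for the paper's MRRL/AMRS components.

```python
# Illustrative sketch only; dimensions and the affinity rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonImagingVAE(nn.Module):
    """Encodes non-imaging features into a latent vector whose
    dimensionality matches the imaging features."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.enc = nn.Linear(in_dim, 64)
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar, z

def population_adjacency(img_feats, nonimg_latent, threshold=0.5):
    """Builds a population graph where the edge weight between two
    subjects is the product of their imaging and non-imaging cosine
    similarities, thresholded (an assumed stand-in for AMRS)."""
    sim_img = F.cosine_similarity(img_feats.unsqueeze(1),
                                  img_feats.unsqueeze(0), dim=-1)
    sim_non = F.cosine_similarity(nonimg_latent.unsqueeze(1),
                                  nonimg_latent.unsqueeze(0), dim=-1)
    adj = sim_img * sim_non
    return (adj > threshold).float() * adj

# Usage: 8 subjects, 16-dim imaging features, 5 non-imaging variables.
img = torch.randn(8, 16)
vae = NonImagingVAE(in_dim=5, latent_dim=16)
recon, mu, logvar, z = vae(torch.randn(8, 5))
adj = population_adjacency(img, z)
print(adj.shape)  # torch.Size([8, 8])
```

In this sketch, aligning the non-imaging latent dimension with the imaging feature dimension is what allows the two modalities to interact beyond node-edge assignments; the resulting adjacency would then feed a graph encoder such as the paper's GTUNet.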
DOI: http://dx.doi.org/10.1109/TMI.2025.3556420