
Hierarchical in-out fusion for incomplete multimodal brain tumor segmentation.

Category Ranking: 98% | Total Visits: 921 | Avg Visit Duration: 2 minutes | Citations: 20

Article Abstract

Fusing multimodal data plays a crucial role in accurate brain tumor segmentation and clinical diagnosis, especially in scenarios with incomplete multimodal data. Existing multimodal fusion models usually perform inter-modal fusion at both shallow and deep layers, relying predominantly on traditional attention fusion. However, using the same fusion strategy at different layers leads to two critical issues: feature redundancy in shallow layers, due to repetitive weighting of semantically similar low-level features, and progressive texture-detail degradation in deeper layers, caused by the inherent characteristics of deep neural networks. Additionally, the absence of intra-modal fusion results in the loss of unique critical information. To better represent the latent correlations among each modality's unique critical features, this paper proposes a Hierarchical In-Out Fusion method. The Out-Fusion block performs inter-modal fusion at both shallow and deep layers: in the shallow layers, the SAOut-Fusion block uses self-attention to extract texture information, while at the deepest layer of the network, the DDOut-Fusion block integrates spatial- and frequency-domain features, compensating for the loss of texture detail by enhancing the high-frequency components; a gating mechanism then effectively combines the tumor's positional and structural information with its texture details. At the same time, the In-Fusion block is designed for intra-modal fusion, using multiple stacked Transformer-CNN blocks to hierarchically extract modality-specific critical signatures. Experimental results on the BraTS2018 and BraTS2020 datasets validate the superiority of this method, demonstrating improved network robustness and sustained effectiveness even when certain modalities are missing. Our code is available at https://github.com/liufangcoca-515/InOutFusion-main.
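
As a concrete illustration of the dual-domain idea described in the abstract, below is a minimal, hypothetical PyTorch sketch in the spirit of the DDOut-Fusion block: it amplifies the high-frequency component of a deep feature map via a 3D FFT and mixes it with a spatial-convolution branch through a learned gate. This is not the authors' implementation (see the linked repository for that); the module name, the frequency cutoff, and the gain factor are illustrative assumptions.

```python
# Hypothetical sketch of a DDOut-Fusion-style block (not the paper's code):
# enhance the high-frequency detail of a deep feature map in the frequency
# domain, then gate it against a spatial branch, as the abstract describes.
import torch
import torch.nn as nn


class GatedSpatialFrequencyFusion(nn.Module):
    """Toy dual-domain fusion: boost high frequencies, gate against a spatial path."""

    def __init__(self, channels: int, hf_gain: float = 1.5):
        super().__init__()
        self.hf_gain = hf_gain  # assumed amplification factor for high-frequency detail
        self.spatial = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        # The gate predicts per-voxel mixing weights from both branches.
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) feature map from the deepest encoder layer.
        spatial_feat = self.spatial(x)

        # Frequency branch: FFT, amplify everything outside a low-frequency
        # cube around DC, then transform back. The cutoff (1/4 extent per axis)
        # is an assumption for illustration.
        freq = torch.fft.fftn(x, dim=(-3, -2, -1))
        freq = torch.fft.fftshift(freq, dim=(-3, -2, -1))
        d, h, w = x.shape[-3:]
        mask = torch.full((d, h, w), self.hf_gain, device=x.device)
        cd, ch, cw = d // 4, h // 4, w // 4  # leave the central low-freq cube unscaled
        mask[d // 2 - cd:d // 2 + cd,
             h // 2 - ch:h // 2 + ch,
             w // 2 - cw:w // 2 + cw] = 1.0
        freq = freq * mask
        freq = torch.fft.ifftshift(freq, dim=(-3, -2, -1))
        freq_feat = torch.fft.ifftn(freq, dim=(-3, -2, -1)).real

        # Learned gate mixes positional/structural (spatial branch) and
        # texture (high-frequency branch) information voxel-wise.
        g = self.gate(torch.cat([spatial_feat, freq_feat], dim=1))
        return g * spatial_feat + (1 - g) * freq_feat


if __name__ == "__main__":
    block = GatedSpatialFrequencyFusion(channels=8)
    out = block(torch.randn(1, 8, 16, 16, 16))
    print(out.shape)  # torch.Size([1, 8, 16, 16, 16])
```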


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12219404
DOI: http://dx.doi.org/10.1038/s41598-025-07466-9

Publication Analysis

Top Keywords

intra-modal fusion: 12
fusion: 9
hierarchical in-out: 8
in-out fusion: 8
incomplete multimodal: 8
brain tumor: 8
tumor segmentation: 8
multimodal data: 8
fusion shallow: 8
shallow deep: 8
