Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Personalized healthcare increasingly relies on AI-driven multimodal fusion to enhance diagnostic precision and treatment planning. However, long MRI acquisition times, imaging artifacts, and missing modalities often result in incomplete or absent critical imaging information, limiting the application of multimodal MRI in personalized diagnostics. To address this challenge, we propose the Dual-Scale Multimodal Fusion Network (Dual-MFNet), a novel AI-driven approach to personalized MRI synthesis for reconstructing missing modalities with high anatomical fidelity. Our method leverages state-space models to capture long-range contextual dependencies while preserving local structural integrity, ensuring accurate cross-modal synthesis. The Dual-Scale Feature Fuser (Dual-Fuser) balances global coherence with fine-grained detail preservation, while the Twin-Stream Fusion (TSF) module dynamically enhances critical cross-modal information. In addition, the Feature Aggregation (FA) module consolidates multimodal inputs into a cohesive representation, producing high-fidelity synthesized MRI customized to individual patient needs. To assess clinical relevance, we conducted extensive quantitative evaluations and a radiological reader study with five experienced radiologists. The results demonstrate that Dual-MFNet outperforms state-of-the-art methods, particularly in preserving tumor boundaries, fine tissue textures, and anatomical clarity, making it a valuable tool for advancing personalized MRI-based diagnostics.
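The abstract describes the architecture only at a high level (state-space backbone, Dual-Fuser, TSF, FA), so no implementation details are available here. As a minimal sketch of the general dual-scale idea only, the following PyTorch module fuses a downsampled global-context branch with a full-resolution local branch through a learned gate; every class name, parameter, and shape below is a hypothetical illustration, not the paper's Dual-Fuser.

# Hypothetical sketch of a dual-scale fusion block (NOT the paper's Dual-Fuser):
# a downsampled branch approximates global context, a full-resolution branch
# preserves local detail, and a learned gate blends the two per position.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualScaleFuserSketch(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Global branch: pooling shrinks the feature map so the 3x3 conv sees
        # a much larger effective receptive field.
        self.global_branch = nn.Sequential(
            nn.AvgPool2d(kernel_size=reduction),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.GELU(),
        )
        # Local branch: full-resolution conv keeps fine structural detail.
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.GELU(),
        )
        # Gate: per-pixel, per-channel mixing weight between the two branches.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.global_branch(x)
        g = F.interpolate(g, size=x.shape[-2:], mode="bilinear", align_corners=False)
        l = self.local_branch(x)
        a = self.gate(torch.cat([g, l], dim=1))
        return a * g + (1.0 - a) * l

if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)          # e.g. features from one MRI contrast
    print(DualScaleFuserSketch(64)(feats).shape)  # torch.Size([1, 64, 128, 128])

How Dual-MFNet actually wires its state-space backbone, TSF, and FA modules together is described only in the full paper.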


Source: http://dx.doi.org/10.1109/JBHI.2025.3601059

Publication Analysis

Top Keywords

multimodal fusion (12); dual-scale multimodal (8); personalized mri (8); mri synthesis (8); missing modalities (8); multimodal (5); personalized (5); mri (5); dual-mfnet ai-driven (4); ai-driven dual-scale (4)

Similar Publications

Neuroimaging Data Informed Mood and Psychosis Diagnosis Using an Ensemble Deep Multimodal Framework.

Hum Brain Mapp

September 2025

Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, and Emory University, Atlanta, Georgia, USA.

Investigating neuroimaging data to identify brain-based markers of mental illnesses has gained significant attention. Nevertheless, these endeavors encounter challenges arising from a reliance on symptoms and self-report assessments in making an initial diagnosis. The absence of biological data to delineate nosological categories hinders the provision of additional neurobiological insights into these disorders.


Bipolar disorder (BD) is a debilitating mental illness characterized by significant mood swings, posing a substantial challenge for accurate diagnosis due to its clinical complexity. This paper presents CS2former, a novel approach leveraging a dual channel-spatial feature extraction module within a Transformer model to diagnose BD from resting-state functional MRI (Rs-fMRI) and T1-weighted MRI (T1w-MRI) data. CS2former employs a Channel-2D Spatial Feature Aggregation Module to decouple channel and spatial information from Rs-fMRI, while a Channel-3D Spatial Attention Module with Synchronized Attention Module (SAM) concurrently computes attention for T1w-MRI feature maps.
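The snippet above names the CS2former modules but not their internals. Purely to illustrate what decoupling channel and spatial attention looks like in code, here is a generic CBAM-style sketch in PyTorch; it is a stand-in construction, not the actual Channel-2D or Channel-3D modules of CS2former.

# Generic CBAM-style channel/spatial attention, shown only to illustrate the
# decoupling idea; these are not the CS2former modules themselves.
import torch
import torch.nn as nn

class ChannelSpatialAttentionSketch(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze the spatial dimensions, reweight channels.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: squeeze the channel dimension, reweight locations.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)              # channel reweighting
        avg_map = x.mean(dim=1, keepdim=True)    # per-location channel average
        max_map = x.amax(dim=1, keepdim=True)    # per-location channel maximum
        return x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))

if __name__ == "__main__":
    feats = torch.randn(2, 32, 64, 64)  # hypothetical Rs-fMRI-derived feature maps
    print(ChannelSpatialAttentionSketch(32)(feats).shape)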


Background And Objective: The early detection of breast cancer plays a critical role in improving survival rates and facilitating precise medical interventions. Therefore, the automated identification of breast abnormalities becomes paramount, significantly enhancing the prospects of successful treatment outcomes. To address this imperative, our research leverages multiple modalities such as MRI, CT, and mammography to detect and screen for breast cancer.


Drug-target interaction (DTI) prediction is essential for the development of novel drugs and the repurposing of existing ones. However, when drug and target features are mapped onto biological networks, the relational features of drug-target interactions are often not captured. Moreover, the corresponding multimodal models mainly rely on shallow fusion strategies, which leads to suboptimal performance when modeling complex interaction relationships.
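To make the "shallow fusion" criticism concrete, the toy PyTorch sketch below contrasts a concatenate-and-project fusion of pooled drug and target embeddings with a slightly deeper cross-attention fusion. All dimensions, token counts, and class names are illustrative assumptions and are not taken from the publication above.

# Toy contrast between shallow fusion (concatenate pooled embeddings) and a
# deeper interaction model (cross-attention). All shapes and names are
# illustrative; nothing here comes from the publication above.
import torch
import torch.nn as nn

class ShallowFusionDTI(nn.Module):
    """Pool drug and target token embeddings, concatenate, classify."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, drug: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([drug.mean(dim=1), target.mean(dim=1)], dim=-1))

class CrossAttentionDTI(nn.Module):
    """Let drug tokens attend over target tokens before pooling and classifying."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, drug: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=drug, key=target, value=target)
        return self.head(fused.mean(dim=1))

if __name__ == "__main__":
    drug = torch.randn(8, 40, 128)     # e.g. 40 atom-level drug tokens
    target = torch.randn(8, 300, 128)  # e.g. 300 residue-level target tokens
    print(ShallowFusionDTI()(drug, target).shape, CrossAttentionDTI()(drug, target).shape)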


Force prediction is crucial for functional rehabilitation of the upper limb. Surface electromyography (sEMG) signals play a pivotal role in muscle force studies, but their non-stationarity challenges the reliability of sEMG-driven models. This problem may be alleviated by fusion with electrical impedance myography (EIM), an active sensing technique that incorporates tissue morphology information.
