Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Explainable Artificial Intelligence (XAI) can decode 'black box' models, enhancing trust in clinical decision-making. XAI makes the predictions of deep learning models interpretable, transparent, and trustworthy. This study employed XAI techniques to explain the predictions made by a deep learning-based model for diagnosing autism and identifying the memory regions responsible for children's academic performance. This study utilized publicly available sMRI data from the ABIDE-II repository. First, a deep learning model, FaithfulNet, was developed to aid in the diagnosis of autism. Next, gradient-based class activation maps and the SHAP gradient explainer were employed to generate explanations for the model's predictions. These explanations were integrated to develop a novel and faithful visual explanation, Faith_CAM. Finally, this faithful explanation was quantified using the pointing game score and analyzed with cortical and subcortical structure masks to identify the impaired brain regions in the autistic brain. This study achieved a classification accuracy of 99.74% with an AUC value of 1. In addition to facilitating autism diagnosis, this study assesses the degree of impairment in memory regions responsible for children's academic performance, thus contributing to the development of personalized treatment plans.
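For context on the quantification step the abstract mentions: the pointing game counts an explanation as a "hit" when the saliency map's peak falls inside the ground-truth region mask, and reports the hit fraction. A minimal sketch of that metric, assuming 2D NumPy arrays (the function and argument names are illustrative, not from the paper):

```python
import numpy as np

def pointing_game_score(saliency_maps, masks):
    """Fraction of saliency maps whose peak lands inside the ground-truth mask."""
    hits = 0
    for sal, mask in zip(saliency_maps, masks):
        peak = np.unravel_index(np.argmax(sal), sal.shape)  # location of strongest activation
        hits += bool(mask[peak])                            # hit if the peak is inside the mask
    return hits / len(saliency_maps)
```

The same idea extends directly to 3D sMRI volumes, since `np.unravel_index` works for any array shape; the paper's exact fusion rule for combining Grad-CAM and SHAP maps into Faith_CAM is not described in this listing.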

Source: http://dx.doi.org/10.1016/j.brainres.2025.149904

Publication Analysis

Top Keywords

deep learning (12)
autism diagnosis (8)
predictions deep (8)
memory regions (8)
regions responsible (8)
responsible children's (8)
children's academic (8)
academic performance (8)
faithfulnet explainable (4)
deep (4)

Similar Publications

Oral bioavailability property prediction based on task similarity transfer learning.

Mol Divers

September 2025

Laboratory of Molecular Design and Drug Discovery, School of Science, China Pharmaceutical University, Nanjing, 211198, China.

Drug absorption significantly influences pharmacokinetics. Accurately predicting human oral bioavailability (HOB) is essential for optimizing drug candidates and improving clinical success rates. The traditional method based on experiment is a common way to obtain HOB, but the experimental method is time-consuming and costly.

This study explores how differences in colors presented separately to each eye (binocular color differences) can be identified through EEG signals, a method of recording electrical activity from the brain. Four distinct levels of green-red color differences, defined in the CIELAB color space with constant luminance and chroma, are investigated in this study. Analysis of Event-Related Potentials (ERPs) revealed a significant decrease in the amplitude of the P300 component as binocular color differences increased, suggesting a measurable brain response to these differences.

Clinical evaluation of motion robust reconstruction using deep learning in lung CT.

Phys Eng Sci Med

September 2025

Department of Radiology, Otaru General Hospital, Otaru, Hokkaido, Japan.

In lung CT imaging, motion artifacts caused by cardiac motion and respiration are common. Recently, CLEAR Motion, a deep learning-based reconstruction method that applies motion correction technology, has been developed. This study aims to quantitatively evaluate the clinical usefulness of CLEAR Motion.

Predicting complex time series with deep echo state networks.

Chaos

September 2025

School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA.

Although many real-world time series are complex, developing methods that can learn from their behavior effectively enough to enable reliable forecasting remains challenging. Recently, several machine-learning approaches have shown promise in addressing this problem. In particular, the echo state network (ESN) architecture, a type of recurrent neural network where neurons are randomly connected and only the read-out layer is trained, has been proposed as suitable for many-step-ahead forecasting tasks.
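The ESN architecture described above, where the recurrent weights stay random and only the read-out is trained, can be sketched in a few lines. Everything here (function name, reservoir size, spectral radius, ridge strength) is an illustrative assumption, not the cited paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def esn_fit_predict(u_train, y_train, u_test, n_res=200, rho=0.9, ridge=1e-6):
    """Minimal echo state network: fixed random reservoir, ridge-trained read-out."""
    n_in = u_train.shape[1]
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # input weights, never trained
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # recurrent weights, never trained
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

    def run(u, x):
        states = np.empty((len(u), n_res))
        for t in range(len(u)):
            x = np.tanh(W_in @ u[t] + W @ x)  # reservoir update
            states[t] = x
        return states, x

    X, x_last = run(u_train, np.zeros(n_res))
    # Only the read-out layer is trained, via ridge regression on reservoir states
    W_out = y_train.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(n_res))
    X_test, _ = run(u_test, x_last)  # continue from the last training state
    return X_test @ W_out.T
```

On a simple signal such as a sine wave this one-step-ahead setup already tracks the series closely; "deep" ESNs, as in the cited paper, stack several such reservoirs before the read-out.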

Purpose To assess the effectiveness of an explainable deep learning (DL) model, developed using multiparametric MRI (mpMRI) features, in improving diagnostic accuracy and efficiency of radiologists for classification of focal liver lesions (FLLs). Materials and Methods FLLs ≥ 1 cm in diameter at mpMRI were included in the study. nn-Unet and Liver Imaging Feature Transformer (LIFT) models were developed using retrospective data from one hospital (January 2018-August 2023).
