Convolutional Neural Networks (CNNs) are frequently and successfully used in medical prediction tasks. They are often used in combination with transfer learning, leading to improved performance when training data for the task are scarce. The resulting models are highly complex and typically do not provide any insight into their predictive mechanisms, motivating the field of "explainable" artificial intelligence (XAI). However, previous studies have rarely quantitatively evaluated the "explanation performance" of XAI methods against ground-truth data, and the influence of transfer learning on objective measures of explanation performance has not been investigated. Here, we propose a benchmark dataset that allows for quantifying explanation performance in a realistic magnetic resonance imaging (MRI) classification task. We employ this benchmark to understand the influence of transfer learning on the quality of explanations. Experimental results show that popular XAI methods applied to the same underlying model differ vastly in performance, even when considering only correctly classified examples. We further observe that explanation performance strongly depends on the task used for pre-training and the number of CNN layers pre-trained. These results hold after correcting for a substantial correlation between explanation and classification performance.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10925627 | PMC
http://dx.doi.org/10.3389/frai.2024.1330919 | DOI Listing
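The abstract does not spell out here how "explanation performance" is scored against the ground truth. As a minimal illustrative sketch (the precision-at-k overlap below is an assumption chosen for illustration, not necessarily the benchmark's actual metric), an attribution map produced by any XAI method can be compared against a binary ground-truth mask like this:

```python
import numpy as np

def explanation_precision(attribution, gt_mask, top_k=None):
    """Precision@k of an attribution map against a binary ground-truth mask.

    Scores the fraction of the top-k most strongly attributed pixels that
    fall inside the ground-truth region; k defaults to the mask size.
    """
    assert attribution.shape == gt_mask.shape
    flat_attr = np.abs(attribution).ravel()        # relevance magnitude
    flat_mask = gt_mask.astype(bool).ravel()
    k = int(flat_mask.sum()) if top_k is None else top_k
    if k == 0:
        return float("nan")                        # no ground-truth pixels
    top_idx = np.argpartition(flat_attr, -k)[-k:]  # indices of top-k pixels
    return float(flat_mask[top_idx].mean())

# Toy usage: the true signal occupies the top-left 2x2 block of a 4x4 image.
rng = np.random.default_rng(0)
mask = np.zeros((4, 4))
mask[:2, :2] = 1
attr = rng.normal(size=(4, 4)) * 0.1
attr[:2, :2] += 1.0                                # a "good" explanation
print(explanation_precision(attr, mask))           # close to 1.0
```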
Vet Anim Sci
December 2025
Department of Veterinary Medical Sciences, Università di Bologna, Via Tolara di Sopra 50, Ozzano dell'Emilia 40064, Italy.
This paper describes the most frequent chiropractic alterations in healthy and sick foals. The assessment is performed through a motion palpation exam, which locates the hypomobile joints along the skeleton. The motion palpation exam allowed the identification of multiple hypomobile areas in neonatal foals.
JB JS Open Access
September 2025
Department of Orthopaedic Surgery, St. Luke's University Health Network, Bethlehem, Pennsylvania.
Background: The use of artificial intelligence platforms by medical residents as an educational resource is increasing. Within orthopaedic surgery, older Chat Generative Pre-trained Transformer (ChatGPT) models performed worse than resident physicians on practice examinations and rarely answered image-based questions correctly. The newer ChatGPT-4o was designed to address these deficiencies but has not yet been evaluated.
Front Pharmacol
August 2025
Department of Cardiovascular Surgery, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China.
Background: Acute myocardial infarction (AMI) patients with prior malignancy have been largely understudied, despite potentially facing higher risks of adverse outcomes. This case-control study aimed to identify independent risk factors for in-hospital mechanical complications among AMI patients with prior malignancies.
Methods: This study enrolled AMI patients with prior malignancy who were hospitalized for treatment.
Front Physiol
August 2025
Department of Ultrasound, Deyang People's Hospital, Deyang, Sichuan, China.
Background: Antiphospholipid syndrome (APS) is a major immune-related disorder that leads to adverse pregnancy outcomes (APO), including recurrent miscarriage, placental abruption, preterm birth, and fetal growth restriction. Antiphospholipid antibodies (aPLs), particularly anticardiolipin antibodies (aCL), anti-β2-glycoprotein I antibodies (aβ2GP1), and lupus anticoagulant (LA), are considered key biomarkers for APS and are closely associated with adverse pregnancy outcomes. This prospective observational cohort study uses machine learning models to predict adverse pregnancy outcomes in APS patients from early-pregnancy aPL levels and clinical features.
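The snippet does not state which model family or features are used. Purely as a hedged sketch (the feature names, the logistic-regression choice, and the synthetic data below are illustrative assumptions, not the study's actual variables or results), such a prediction pipeline could look like:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: early-pregnancy antibody titres and clinical features.
# Column names are illustrative placeholders, not the study's actual variables.
rng = np.random.default_rng(42)
n = 300
X = pd.DataFrame({
    "aCL_IgG":      rng.lognormal(1.0, 0.8, n),   # anticardiolipin titre
    "ab2GP1":       rng.lognormal(0.8, 0.7, n),   # anti-beta2-glycoprotein I titre
    "LA_ratio":     rng.normal(1.1, 0.2, n),      # lupus anticoagulant ratio
    "age":          rng.normal(31, 5, n),
    "prior_losses": rng.integers(0, 4, n),
})
# Synthetic outcome loosely driven by the antibody-related columns.
logit = (0.4 * np.log(X["aCL_IgG"]) + 5 * 0.5 * (X["LA_ratio"] - 1.1)
         + 0.3 * X["prior_losses"] - 1.0)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```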
Med Eng Phys
October 2025
Biomedical Device Technology, Istanbul Aydın University, Istanbul 34093, Turkey.
Deep learning approaches have improved disease diagnosis efficiency. However, AI-based decision systems lack sufficient transparency and interpretability. This study aims to enhance the explainability and training performance of deep learning models using explainable artificial intelligence (XAI) techniques for brain tumor detection.
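The snippet does not name the specific XAI technique applied to the brain tumor detector. As a minimal sketch of one commonly used option, Grad-CAM, on a toy PyTorch CNN that exposes its last feature map (the architecture, layer choice, and random input below are assumptions, not the paper's model or data):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Toy stand-in classifier; real brain-tumor models are far larger."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        fmap = self.features(x)              # (B, 16, H/2, W/2)
        pooled = fmap.mean(dim=(2, 3))       # global average pooling
        return self.head(pooled), fmap

def grad_cam(model, image, target_class):
    """Grad-CAM heatmap for `target_class` on a (1, 1, H, W) image."""
    model.eval()
    logits, fmap = model(image)
    fmap.retain_grad()                       # keep the feature-map gradient
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)        # channel importance
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))   # weighted sum + ReLU
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam = cam - cam.min()
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

# Usage on a random stand-in "MRI slice".
model = TinyCNN()
x = torch.randn(1, 1, 64, 64)
heatmap = grad_cam(model, x, target_class=1)   # (64, 64) map, values in [0, 1]
print(heatmap.shape, float(heatmap.max()))
```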