Artificial intelligence (AI) tools are increasingly employed in clinical genetics to assist in diagnosing genetic conditions by assessing photographs of patients. For medical uses of AI, explainable AI (XAI) methods offer a promising approach by providing interpretable outputs, such as saliency maps and region relevance visualizations. XAI has been discussed as important for regulatory purposes and to enable clinicians to better understand how AI tools work in practice. However, the real-world effects of XAI on clinician performance, confidence, and trust remain underexplored. This study involved a web-based user experiment with 31 medical geneticists to assess the impact of AI-only diagnostic assistance compared to XAI-supported diagnostics. Participants were randomly assigned to either group and completed diagnostic tasks with 18 facial images of individuals with known genetic syndromes and unaffected individuals, before and after experiencing the AI outputs. The results show that both AI-only and XAI approaches improved diagnostic accuracy and clinician confidence. The effects varied according to the accuracy of AI predictions and the clarity of syndromic features (sample difficulty). While AI support was viewed positively, users approached XAI with skepticism. Interestingly, we found a positive correlation between diagnostic improvement and XAI intervention. Although XAI support did not significantly enhance overall performance relative to AI alone, it prompted users to critically evaluate images with false predictions and influenced their confidence levels. These findings highlight the complexities of trust, perceived usefulness, and interpretability in AI-assisted diagnostics, with important implications for developing and implementing clinical decision-support tools in facial phenotyping for rare genetic diseases.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12191099 | PMC |
| http://dx.doi.org/10.1101/2025.06.08.25328588 | DOI Listing |
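As a rough illustration of the saliency-map style of XAI output described in the abstract above, the sketch below computes a plain gradient saliency map for a stock torchvision classifier. It is a minimal sketch, not the study's diagnostic pipeline: the ResNet-18 model, the `face.jpg` path, and the preprocessing are illustrative placeholders.

```python
# Hypothetical sketch of a gradient-based saliency map (one common XAI output);
# the model, weights, and image path are placeholders, not the study's pipeline.
# Assumes torchvision >= 0.13 for the ResNet18_Weights API.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def vanilla_gradient_saliency(model, image_tensor, target_class=None):
    """Return |d score / d input| per pixel as a coarse relevance map."""
    model.eval()
    x = image_tensor.unsqueeze(0).requires_grad_(True)   # (1, C, H, W)
    scores = model(x)
    if target_class is None:
        target_class = scores.argmax(dim=1).item()
    scores[0, target_class].backward()                    # gradient of the class score
    saliency = x.grad.abs().max(dim=1)[0].squeeze(0)      # max over channels -> (H, W)
    return saliency / (saliency.max() + 1e-8)             # normalize to [0, 1]

if __name__ == "__main__":
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
    img = preprocess(Image.open("face.jpg").convert("RGB"))  # placeholder path
    heatmap = vanilla_gradient_saliency(model, img)
    print(heatmap.shape)  # torch.Size([224, 224])
```

Overlaying the normalized map on the input image yields the kind of region relevance visualization a clinician would review alongside the model's prediction.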
PLoS One
September 2025
Seidenberg School of Computer Science and Information Systems, Pace University, New York, New York, United States of America.
While there has been extensive research on explainable artificial intelligence (XAI) techniques for enhancing AI recommendations, the metacognitive processes involved in interacting with AI explanations remain underexplored. This study examines how AI explanations impact human decision-making by leveraging cognitive mechanisms that evaluate the accuracy of AI recommendations. We conducted a large-scale experiment (N = 4,302) on Amazon Mechanical Turk (AMT), where participants classified radiology reports as normal or abnormal.
PLoS One
September 2025
Mawlana Bhashani Science and Technology University, Tangail, Bangladesh.
Student dropout is a significant challenge in Bangladesh, with serious impacts on both educational and socio-economic outcomes. This study investigates the factors influencing school dropout among students aged 6-24 years, employing data from the 2019 Multiple Indicator Cluster Survey (MICS). The research integrates statistical analysis with machine learning (ML) techniques and explainable AI (XAI) to identify key predictors and enhance model interpretability.
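The pattern this abstract describes (a tree-based classifier plus XAI feature attributions) is commonly implemented with SHAP; the sketch below is a hedged illustration on synthetic data. The column names and target are hypothetical stand-ins, not the actual MICS 2019 variables or the authors' model.

```python
# Hedged sketch of ML + SHAP feature importance for a dropout-style classifier.
# The data and feature names are synthetic placeholders for illustration only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(6, 25, n),
    "household_wealth_index": rng.normal(0, 1, n),
    "mother_education_years": rng.integers(0, 16, n),
    "child_labour": rng.integers(0, 2, n),
})
# Synthetic target loosely tied to the predictors, for demonstration only.
y = ((X["age"] > 15) & (X["household_wealth_index"] < 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Depending on the shap version this is a list per class or a 3-D array;
# take the positive (dropout) class either way.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
importance = np.abs(sv).mean(axis=0)   # global importance per feature
print(dict(zip(X.columns, importance.round(3))))
```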
IEEE Trans Neural Syst Rehabil Eng
September 2025
Given the significant global health burden caused by depression, numerous studies have utilized artificial intelligence techniques to detect it objectively and automatically. However, existing research primarily focuses on improving the accuracy of depression recognition while overlooking the explainability of detection models and the evaluation of feature importance. In this paper, we propose a novel framework named Enhanced Domain Adversarial Neural Network (E-DANN) for depression detection.
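E-DANN is the authors' own framework, but the domain adversarial neural network (DANN) family it extends is built around a gradient-reversal layer between the feature extractor and a domain classifier. The sketch below shows that generic mechanism in PyTorch with illustrative layer sizes; it is not the E-DANN architecture itself.

```python
# Generic DANN-style gradient reversal sketch (not the authors' E-DANN);
# input dimension, hidden size, and lambda are illustrative placeholders.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    def __init__(self, in_dim=128, hidden=64, n_classes=2, n_domains=2):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.label_head = nn.Linear(hidden, n_classes)    # e.g. depressed vs. control
        self.domain_head = nn.Linear(hidden, n_domains)   # e.g. dataset / recording site

    def forward(self, x, lambd=1.0):
        h = self.feature(x)
        y_pred = self.label_head(h)
        d_pred = self.domain_head(GradReverse.apply(h, lambd))
        return y_pred, d_pred

if __name__ == "__main__":
    model = DANN()
    x = torch.randn(8, 128)
    y_pred, d_pred = model(x, lambd=0.5)
    print(y_pred.shape, d_pred.shape)  # torch.Size([8, 2]) torch.Size([8, 2])
```

Training the label head normally while the reversed gradient pushes the features to confuse the domain head is what makes the learned representation domain-invariant.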
Med Eng Phys
October 2025
Biomedical Device Technology, Istanbul Aydın University, 34093 Istanbul, Turkey.
Deep learning approaches have improved disease diagnosis efficiency. However, AI-based decision systems lack sufficient transparency and interpretability. This study aims to enhance the explainability and training performance of deep learning models using explainable artificial intelligence (XAI) techniques for brain tumor detection.
ACS Appl Mater Interfaces
September 2025
Graduate School of Frontier Sciences, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8568, Japan.
Chemical sensor arrays mimic the mammalian olfactory system to achieve artificial olfaction, and receptor materials resembling olfactory receptors are being actively developed. To realize practical artificial olfaction, it is essential to provide guidelines for developing effective receptor materials based on the structure-activity relationship. In this study, we used an explainable AI (XAI) technique to visualize the relationship between sensing-signal features and odorant molecular features.