Background: The decision-making processes of automated vehicles (AVs) can confuse users and reduce trust, highlighting the need for clear and human-centric explanations. Such explanations can help users understand AV actions, facilitate smooth control transitions and enhance transparency, acceptance, and trust. Critically, such explanations could improve situational awareness and support timely, appropriate human responses, thereby reducing the risk of misuse, unexpected automated decisions, and delayed reactions in safety-critical scenarios. However, current literature offers limited insight into how different types of explanations impact drivers in diverse scenarios and the methods for evaluating their quality. This paper systematically reviews what, when and how to provide human-centric explanations in AV contexts.
Methods: The systematic review followed PRISMA guidelines and covered five databases (Scopus, Web of Science, IEEE Xplore, TRID, and Semantic Scholar) from 2000 to April 2024. Of the 266 articles identified, 59 met the inclusion criteria.
Results: Providing a detailed content explanation following the AV's driving actions in real time does not always increase user trust and acceptance. Explanations that clarify the reasoning behind actions are more effective than those merely describing actions. Providing explanations before action is recommended, though the optimal timing remains uncertain. Multimodal explanations (visual and audio) are most effective when each mode conveys unique information; otherwise, visual-only explanations are preferred. The narrative perspective (first-person vs. third-person) also affects user trust differently across scenarios.
Conclusions: The review underscores the importance of tailoring human-centric explanations to specific driving contexts. Future research should address explanation length, timing, and modality coordination and focus on real-world studies to enhance generalisability. These insights are vital for advancing the research of human-centric explanations in AV systems and fostering safer, more trustworthy human-vehicle interactions, ultimately reducing the risk of inappropriate reactions, delayed responses, or user error in traffic settings.
| Download full-text PDF | Source |
| --- | --- |
| http://dx.doi.org/10.1016/j.aap.2025.108152 | DOI Listing |
Front Med (Lausanne)
June 2025
Taizhou Vocational College of Science and Technology, School of Accounting and Finance, Taizhou, Zhejiang, China.
Introduction: The integration of Explainable Artificial Intelligence (XAI) into time series prediction plays a pivotal role in advancing economic mental health analysis, ensuring both transparency and interpretability in predictive models. Traditional deep learning approaches, while highly accurate, often operate as black boxes, making them less suitable for high-stakes domains such as mental health forecasting, where explainability is critical for trust and decision-making. Existing explainability methods provide only partial insights, limiting their practical application in sensitive domains like mental health analytics.
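As a rough illustration of the kind of model-agnostic explanation discussed here (a minimal sketch, not the approach used in the cited study), the snippet below fits a lag-based forecaster to a synthetic series and scores each lag with permutation importance; the data, model choice, and 12-month lag window are assumptions made purely for demonstration.

```python
# Minimal illustrative sketch (not the cited paper's method): model-agnostic
# feature attribution for a lag-based time-series forecaster.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic monthly series with trend and seasonality (stand-in data).
t = np.arange(240)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, t.size)

# Build lagged features: predict y[t] from the previous 12 observations.
n_lags = 12
X = np.column_stack([series[i:-(n_lags - i)] for i in range(n_lags)])
y = series[n_lags:]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each lag hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for lag, score in enumerate(result.importances_mean[::-1], start=1):
    print(f"lag-{lag}: {score:.3f}")
```

Attributions of this kind answer only "which inputs mattered for this model", which is the sort of partial insight the abstract above notes; they do not by themselves make a forecast trustworthy for high-stakes use.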
Waste Manag
March 2025
Department of Design and Automation, Cyber-Physical Systems Lab, School of Mechanical Engineering, Vellore Institute of Technology, Vellore, 632014 Tamilnadu, India. Electronic address:
Landfills rank third among anthropogenic sources of atmospheric methane, so greater emphasis is needed on quantifying landfill methane emissions to mitigate environmental degradation. However, estimating and predicting landfill methane emissions is challenging because modelling methane generation involves different chemical, biological, and physical reactions. Various machine learning techniques lack explainability and the context needed to address the uncertainties of landfill emissions.
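To illustrate how feature-level attributions might be attached to an emission predictor (a hedged sketch under assumed inputs, not the cited work's model), the example below trains a tree-based regressor on hypothetical landfill features and decomposes its predictions with SHAP values; the feature names and synthetic target are placeholders.

```python
# Hedged sketch, not the cited paper's model: attributing a tree-based
# emission predictor's output to input features with SHAP values.
# Feature names and data are hypothetical placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
X = pd.DataFrame({
    "waste_mass_tonnes": rng.uniform(1e4, 1e6, n),
    "waste_age_years": rng.uniform(1, 30, n),
    "moisture_content": rng.uniform(0.1, 0.6, n),
    "ambient_temp_c": rng.uniform(5, 35, n),
})
# Placeholder target loosely mimicking first-order decay behaviour.
y = (X["waste_mass_tonnes"] * 1e-4
     * np.exp(-0.05 * X["waste_age_years"])
     * (0.5 + X["moisture_content"])
     + rng.normal(0, 1, n))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP values decompose each prediction into per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:50])
print(pd.DataFrame(shap_values, columns=X.columns).abs().mean())
```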
Front Artif Intell
October 2024
HU University of Applied Sciences Utrecht, Research Group Artificial Intelligence, Utrecht, Netherlands.
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI and, if so, what exactly needs to be evaluated and how.
Healthcare (Basel)
January 2024
Medical School, University of Nicosia, Nicosia 2408, Cyprus.
Background: Human-centric artificial intelligence (HCAI) aims to provide support systems that can act as peer companions to an expert in a specific domain, by simulating their way of thinking and decision-making in solving real-life problems. The gynaecological artificial intelligence diagnostics (GAID) assistant is such a system. Based on artificial intelligence (AI) argumentation technology, it was developed to incorporate, as much as possible, a complete representation of the medical knowledge in gynaecology and to become a real-life tool that will practically enhance the quality of healthcare services and reduce stress for the clinician.