Article Abstract

Background: The decision-making processes of automated vehicles (AVs) can confuse users and reduce trust, highlighting the need for clear and human-centric explanations. Such explanations can help users understand AV actions, facilitate smooth control transitions and enhance transparency, acceptance, and trust. Critically, such explanations could improve situational awareness and support timely, appropriate human responses, thereby reducing the risk of misuse, unexpected automated decisions, and delayed reactions in safety-critical scenarios. However, current literature offers limited insight into how different types of explanations impact drivers in diverse scenarios and the methods for evaluating their quality. This paper systematically reviews what, when and how to provide human-centric explanations in AV contexts.

Methods: The systematic review followed PRISMA guidelines and covered five databases (Scopus, Web of Science, IEEE Xplore, TRID, and Semantic Scholar) from 2000 to April 2024. Of the 266 identified articles, 59 met the inclusion criteria.

Results: Providing a detailed content explanation after the AV's driving actions in real time does not always increase user trust and acceptance. Explanations that clarify the reasoning behind actions are more effective than those merely describing actions. Providing explanations before an action is recommended, though the optimal timing remains uncertain. Multimodal explanations (visual and audio) are most effective when each mode conveys unique information; otherwise, visual-only explanations are preferred. The narrative perspective (first-person vs. third-person) also affects user trust differently across scenarios.

Conclusions: The review underscores the importance of tailoring human-centric explanations to specific driving contexts. Future research should address explanation length, timing, and modality coordination, and focus on real-world studies to enhance generalisability. These insights are vital for advancing research on human-centric explanations in AV systems and fostering safer, more trustworthy human-vehicle interactions, ultimately reducing the risk of inappropriate reactions, delayed responses, or user error in traffic settings.

Source
http://dx.doi.org/10.1016/j.aap.2025.108152

Publication Analysis

Top Keywords

human-centric explanations: 20
explanations: 11
automated vehicles: 8
systematic review: 8
reducing risk: 8
user trust: 8
human-centric: 5
explanations users: 4
users automated: 4
vehicles systematic: 4

Similar Publications

Explainable AI for time series prediction in economic mental health analysis.

Front Med (Lausanne)

June 2025

Taizhou Vocation College of Science Technology, School of Accounting Finance, Taizhou, Zhejiang, China.

Introduction: The integration of Explainable Artificial Intelligence (XAI) into time series prediction plays a pivotal role in advancing economic mental health analysis, ensuring both transparency and interpretability in predictive models. Traditional deep learning approaches, while highly accurate, often operate as black boxes, making them less suitable for high-stakes domains such as mental health forecasting, where explainability is critical for trust and decision-making. Existing explainability methods provide only partial insights, limiting their practical application in sensitive domains like mental health analytics.

Trustworthy and Human Centric neural network approaches for prediction of landfill methane emission and sustainable waste management practices.

Waste Manag

March 2025

Department of Design and Automation, Cyber-Physical Systems Lab, School of Mechanical Engineering, Vellore Institute of Technology, Vellore, 632014 Tamilnadu, India.

Landfills rank third among the anthropogenic sources of methane gas in the atmosphere; hence, greater emphasis is needed on quantifying landfill methane emissions to mitigate environmental degradation. However, estimating and predicting landfill methane emissions is challenging, as modelling methane generation involves complex chemical, biological, and physical reactions. Moreover, various machine learning techniques lack explainability and context in addressing the uncertainties of landfill emissions.

Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how.

Background: Human-centric artificial intelligence (HCAI) aims to provide support systems that can act as peer companions to an expert in a specific domain, by simulating their way of thinking and decision-making in solving real-life problems. The gynaecological artificial intelligence diagnostics (GAID) assistant is such a system. Based on artificial intelligence (AI) argumentation technology, it was developed to incorporate, as much as possible, a complete representation of the medical knowledge in gynaecology and to become a real-life tool that will practically enhance the quality of healthcare services and reduce stress for the clinician.
