Developing and validating a Chinese human-automation trust scale: Advancing trust measurement of emerging automation in sustainable ergonomics.

Appl Ergon

Institute of Systems and Information Engineering, University of Tsukuba, Tsukuba, Ibaraki, Japan; Center for Artificial Intelligence Research, University of Tsukuba, Tsukuba, Ibaraki, Japan.

Published: May 2025


Citations: 20

Article Abstract

Measuring humans' learned trust in emerging automation systems across different trust development stages is important for fostering sustainable and human-centered human-automation interaction. Given the notable differences in human-automation trust between Chinese culture and other cultures, particularly Western cultures, the development of an effective measurement tool for human-automation trust within the Chinese cultural context is indispensable. This study aimed to develop a Chinese version of the Human-Automation Trust Scale (C-HATS) with reasonable reliability and validity, based on several existing theories and scales related to human-automation trust. Following three phases of assessment, including exploratory factor analysis, item analysis, and confirmatory factor analysis, the scale demonstrated reasonable reliability and validity for both initial and post-task trust assessments. However, certain items of our C-HATS should be applied separately when assessing initial and post-task trust. Furthermore, it is crucial to acknowledge the structural differences between initial and post-task trust. Post-task trust consists of three factors: performance-, process-, and purpose-based trust, whereas initial trust consists of only two dimensions: cognition-based and affect-based trust. These distinctions should be considered when evaluating the subfacets of initial and post-task trust. Although further validation is required, the developed C-HATS has the potential to assess initial and post-task human-automation trust within the Chinese cultural context across various automation systems.
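The validation workflow described above (exploratory factor analysis on Likert-scale item responses, followed by inspection of factor loadings) can be sketched as follows. This is a minimal illustration on simulated data, not the authors' analysis pipeline: the sample sizes, item counts, and factor labels are hypothetical, and the three-factor solution merely mirrors the performance/process/purpose structure reported for post-task trust.

```python
# Illustrative sketch of an exploratory-factor-analysis step on simulated
# Likert-scale trust responses. All numbers and labels are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 300 respondents answering 12 trust items on a 1-7 Likert scale,
# driven by 3 latent factors (e.g., performance-, process-, purpose-based trust).
n_respondents, n_items, n_factors = 300, 12, 3
latent = rng.normal(size=(n_respondents, n_factors))
true_loadings = rng.uniform(0.4, 0.9, size=(n_factors, n_items))
raw = latent @ true_loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))
responses = np.clip(np.round(raw * 1.5 + 4), 1, 7)  # map onto the 1-7 scale

# Exploratory factor analysis: estimate item loadings for a 3-factor solution.
fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(responses)

print(fa.components_.shape)  # one loading per factor per item: (3, 12)
```

In a real scale-development study this step would be followed by item analysis (dropping weakly loading items) and a confirmatory factor analysis on an independent sample, as the abstract outlines.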


Source
http://dx.doi.org/10.1016/j.apergo.2025.104477

Publication Analysis

Top Keywords

human-automation trust (24); initial post-task (20); post-task trust (20); trust (17); trust chinese (12); trust scale (8); emerging automation (8); automation systems (8); chinese cultural (8); cultural context (8)

Similar Publications

Objective: We examined whether allowing operators to self-select automation transparency level (adaptable transparency) could improve accuracy of automation use compared to nonadaptable (fixed) low and high transparency. We examined factors underlying higher transparency selection (decision risk, perceived difficulty). Background: Increased fixed transparency typically improves automation use accuracy but can increase bias toward agreeing with automated advice.


As automation becomes increasingly integrated into complex military tasks, its role in supporting human performance under fatigue warrants careful evaluation. A specific military use case in which automatic target cuing (ATC) is integrated is undersea threat detection (UTD). These types of tasks demand sustained vigilance, accurate classification, and reliable metacognitive judgements.


Autonomous vehicles (AV) offer promising benefits to society in terms of safety, environmental impact, and increased mobility. However, acute challenges persist with any novel technology, including the perceived risks and trust underlying public acceptance. While research examining the current state of AV public perceptions and future challenges related to both societal and individual barriers to trust and risk perceptions is emerging, it is highly fragmented across disciplines.


Objective: We investigated how various error patterns from an AI aid in a nonbinary decision scenario influence human operators' trust in the AI system and their task performance. Background: Existing research on trust in automation/autonomy predominantly uses signal detection theory (SDT) to model autonomy performance. SDT classifies the world into binary states and hence oversimplifies the interaction observed in real-world scenarios.


Objective: To examine operator state variables (workload, fatigue, trust in automation, task engagement) that potentially predict return-to-manual (RTM) performance after automation fails to complete a task action. Background: Limited research has examined the extent to which within-person variability in operator states predicts RTM performance, a prerequisite to adapting work systems based on expected performance degradation/operator strain. We examine whether operator states differentially predict RTM performance as a function of degree of automation (DOA).
