Category Ranking: 98%

Total Visits: 921

Avg Visit Duration: 2 minutes

Citations: 20

Article Abstract

Background: Artificial intelligence-based clinical decision support systems (AI-CDSSs) have enhanced personalized medicine and improved the efficiency of health care workers. Despite these opportunities, trust in these tools remains a critical factor for their successful integration into practice. Existing research lacks synthesized insights and actionable recommendations to guide the development of AI-CDSSs that foster trust among health care workers.

Objective: This systematic review aims to identify and synthesize key factors that influence health care workers' trust in AI-CDSSs and to provide actionable recommendations for enhancing their trust in these systems.

Methods: We conducted a systematic review of studies published between January 2020 and November 2024, retrieved from PubMed, Scopus, and Google Scholar. Inclusion criteria focused on studies that examined health care workers' perceptions, experiences, and trust in AI-CDSSs. Studies published in languages other than English and those unrelated to health care settings were excluded. Two independent reviewers followed the Cochrane Collaboration Handbook and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines. Analysis was conducted using a data charter developed for this review. The Critical Appraisal Skills Programme tool was applied to assess the quality of the included studies and to evaluate the risk of bias, ensuring a rigorous and systematic review process.

Results: A total of 27 studies met the inclusion criteria, involving diverse health care workers, predominantly in hospital settings. Qualitative methods were the most common (n=16, 59%), with sample sizes ranging from small focus groups to cohorts of over 1000 participants. Eight key themes emerged as pivotal in improving health care workers' trust in AI-CDSSs: (1) System Transparency, emphasizing the need for clear and interpretable AI; (2) Training and Familiarity, highlighting the importance of knowledge sharing and user education; (3) System Usability, focusing on effective integration into clinical workflows; (4) Clinical Reliability, addressing the consistency and accuracy of system performance; (5) Credibility and Validation, referring to how well the system performs across diverse clinical contexts; (6) Ethical Considerations, examining medicolegal liability, fairness, and adherence to ethical standards; (7) Human-Centric Design, prioritizing patient-centered approaches; and (8) Customization and Control, highlighting the need to tailor tools to specific clinical needs while preserving health care providers' decision-making autonomy. Barriers to trust included algorithmic opacity, insufficient training, and ethical challenges, while enabling factors for health care workers' trust in AI-CDSS tools were transparency, usability, and clinical reliability.

Conclusions: The findings highlight the need for explainable AI models, comprehensive training, stakeholder involvement, and human-centered design to foster health care workers' trust in AI-CDSSs. Although the heterogeneity of study designs and lack of specific data limit further analysis, this review bridges existing gaps by identifying key themes that support trust in AI-CDSSs. It also recommends that future research include diverse demographics, cross-cultural perspectives, and contextual differences in trust across various health care professions.


Source
http://dx.doi.org/10.2196/69678

Publication Analysis

Top Keywords

health care (48)
care workers' (20)
trust ai-cdsss (20)
systematic review (16)
workers' trust (16)
trust (12)
health (12)
care (12)
care workers (12)
artificial intelligence-based (8)

Similar Publications

Objectives: The aim of this study was to explore contributing factors identified in serious incident investigations conducted by internal, independent multidisciplinary teams.

Methods: A total of 166 serious incident investigation reports, conducted between 2018 and 2023 in 11 integrated social and health care organizations in Finland, were analyzed. The reports were classified by incident type and contributing factor, which were analyzed using the WHO's Conceptual Framework for the International Classification for Patient Safety.


Background: Chest radiography is often performed preoperatively as a common diagnostic tool. However, chest radiography carries the risk of radiation exposure. Given the uncertainty surrounding the utility of preoperative chest radiographs, physicians require systematically developed recommendations.


Purpose: The fourth phase of the Electronic Medical Records and Genome Network (eMERGE4) is testing the return of 10 polygenic risk scores (PRS) across multiple clinics. Understanding the perspectives of health-system leaders and frontline clinicians can inform plans for implementation of PRS.

Methods: Fifteen health-system leaders and 20 primary care providers (PCPs) took part in semi-structured interviews.


Objectives: To assess changes in greenhouse gas emission rates associated with the use of anaesthetic gases (desflurane, sevoflurane, and isoflurane) in Australian health care during 2002-2022, overall and by state or territory and hospital type.

Study Design: Retrospective descriptive analysis of IQVIA anaesthetic gases purchasing data.

Setting: All Australian public and private hospitals, 1 January 2002 - 31 December 2022.
