
Artificial Intelligence Chatbots in Pediatric Emergencies: A Reliable Lifeline or a Risk? | LitMetric

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Introduction: Artificial intelligence (AI) chatbots have rapidly gained popularity for disseminating health information, especially with the recent growth of digital medicine. Recent studies have shown that Chat Generative Pre-Trained Transformer (ChatGPT; OpenAI, San Francisco, CA), a widely used AI chatbot, has at times surpassed emergency department physicians in diagnostic accuracy and has passed basic life support (BLS) exams, underscoring its potential for emergency use. Parents are a key demographic for online health information, frequently turning to these chatbots for urgent guidance during child-related emergencies such as choking incidents. While research has extensively examined AI chatbots' effectiveness in delivering adult BLS guidelines, their accuracy and reliability in providing pediatric BLS guidance aligned with American Heart Association (AHA) standards remain underexplored. This gap raises concerns about the safety and appropriateness of relying on AI chatbots for guidance in pediatric emergencies. We therefore compared the performance of two ChatGPT versions, ChatGPT-4o and ChatGPT-4o mini, against established AHA pediatric protocols, aiming to pinpoint improvements needed for their integration into emergency response frameworks and to ensure parents receive trustworthy assistance in critical situations.

Methodology: A prospective comparative content analysis was conducted, comparing responses from ChatGPT (version 4o and its mini version) against the 2020 AHA Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. The analysis focused on pediatric BLS, using 13 broad questions designed to cover all key components, from fundamental concepts such as the pediatric chain of survival to specific emergencies such as choking. Responses were evaluated for completeness and conformity to AHA guidelines. Completeness was rated as 'Completely Addressed', 'Partially Addressed', or 'Not Addressed', with partial responses further classified as 'Superficial', 'Inaccurate', or 'Hallucination'. Conformity to the AHA 2020 guidelines was analyzed and classified in the same way. Reliability was assessed using Cronbach's alpha, and Cohen's kappa was used to measure inter-rater agreement between responses generated on two separate devices for the same set of questions.

Results: Content analysis of ChatGPT responses revealed that only 9.61% were fully addressed, and just 5.77% fully conformed to the AHA 2020 pediatric BLS guidelines. A majority of responses (61.54%) were partially addressed and lacked depth, while 59.61% conformed only partially and superficially to the guidelines. Additionally, 5.77% of the queries were not addressed at all. ChatGPT-4o responses were generally more detailed and comprehensive than those from ChatGPT-4o mini. Inter-rater agreement between the two users ranged from slight to substantial.

Conclusions: While chatbots may assist with basic guidance, they lack the accuracy, depth, and hands-on instruction crucial for life-saving procedures. Misinterpreted or incomplete information from chatbots could lead to critical errors in emergencies. Hence, widespread BLS training remains essential to ensure individuals have the practical skills and precise knowledge needed to respond effectively in real-life situations.
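The Cohen's kappa statistic used in the Methodology corrects raw inter-rater agreement for agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed fraction of matching labels and p_e is the chance agreement implied by each rater's label frequencies. A minimal sketch of that calculation, with purely hypothetical ratings on the study's three-level completeness scale (the actual study data are not reproduced here):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical labels to the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of 13 responses (not the study's actual data).
rater_1 = ["Completely", "Partially", "Partially", "Not", "Partially",
           "Completely", "Partially", "Partially", "Not", "Partially",
           "Partially", "Completely", "Partially"]
rater_2 = ["Completely", "Partially", "Not", "Not", "Partially",
           "Partially", "Partially", "Partially", "Not", "Partially",
           "Partially", "Completely", "Partially"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # prints 0.72
```

By common benchmarks (Landis and Koch), values near 0.2 indicate slight agreement and values above 0.6 substantial agreement, which is the range the study reports.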


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12401188
DOI: http://dx.doi.org/10.7759/cureus.89234

Publication Analysis

Top Keywords

pediatric bls: 12
responses: 9
artificial intelligence: 8
intelligence chatbots: 8
pediatric emergencies: 8
emergencies choking: 8
bls guidelines: 8
chatgpt-4o mini: 8
assistance critical: 8
critical situations: 8
