
Readability of Pediatric Otolaryngology Information: Comparing AI-Generated Content With Google Search Results


Article Abstract

Objective: This study evaluates and compares the readability of pediatric otolaryngology patient education materials generated by ChatGPT4o and those retrieved from Google searches. The goal is to determine whether artificial intelligence (AI)-generated content improves accessibility compared to institutionally affiliated online resources.

Study Design: Cross-sectional readability analysis.

Setting: Online educational materials focused on pediatric otolaryngology topics.

Methods: Educational articles covering 10 pediatric otolaryngology conditions were either sourced via Google search or generated using ChatGPT4o. All texts were standardized by removing extraneous formatting. Readability was assessed using six validated metrics: Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease Score (FRES), Gunning-Fog Index, Simple Measure of Gobbledygook (SMOG), Coleman-Liau Index, and Automated Readability Index (ARI). Statistical comparisons were performed using paired t tests or Wilcoxon signed-rank tests to evaluate differences in scores between sources.
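The two Flesch metrics named above are defined by standard published formulas over words per sentence and syllables per word. As a rough illustration only (this is not the study's actual tooling; the function names and the vowel-group syllable heuristic are assumptions of this sketch), a minimal Python version looks like:

```python
import re

def count_syllables(word: str) -> int:
    """Heuristic syllable count: runs of vowels, minus a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def readability(text: str) -> dict:
    """Compute FKGL and FRES from raw text using the standard formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    return {
        # Flesch-Kincaid Grade Level: higher = harder (approx. US grade)
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
        # Flesch Reading Ease Score: higher = easier (90-100 ~ 5th grade)
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
    }
```

Short, monosyllabic sentences score near the bottom of the FKGL scale and above 100 on FRES, which is the direction of the contrast the study reports: lower FRES and higher FKGL indicate the more complex ChatGPT4o-generated text.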

Results: ChatGPT4o-generated content demonstrated significantly higher FKGL, Gunning-Fog, ARI, and SMOG scores and lower FRES scores compared to Google-sourced materials, indicating greater complexity (P < .05). These differences were most pronounced for simpler conditions such as allergic rhinitis and otitis externa. For more complex topics like laryngomalacia and cleft lip and palate, readability scores were not significantly different between the two sources (P > .05).

Conclusion: ChatGPT4o-generated patient education materials are generally more difficult to read than Google-sourced content, especially for less complex conditions. Given the importance of readability in patient education, AI-generated materials may require further refinement to improve accessibility without compromising accuracy. Enhancing clarity could increase the utility of AI tools for educating parents and caregivers in pediatric otolaryngology.


Source: http://dx.doi.org/10.1002/ohn.70011

Publication Analysis

Top Keywords

pediatric otolaryngology (20)
patient education (12)
readability pediatric (8)
ai-generated content (8)
google search (8)
education materials (8)
generated chatgpt4o (8)
readability (6)
otolaryngology (5)
materials (5)
