Background: To compare the readability of patient education materials (PEMs) on rhinologic conditions and procedures from the American Rhinologic Society (ARS) with those generated by large language models (LLMs).
Methods: Forty-one PEMs from the ARS were retrieved. Readability was assessed with the Flesch-Kincaid Reading Ease (FKRE) and Flesch-Kincaid Grade Level (FKGL), in which higher FKRE and lower FKGL scores indicate better readability. Three LLMs (ChatGPT-4o, Google Gemini, and Microsoft Copilot) were then used to translate each ARS PEM to the recommended sixth-grade reading level. Readability scores were calculated and compared for each translated PEM.
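For reference, both indices are computed from simple counts of words, sentences, and syllables. The short Python sketch below restates the standard Flesch formulas; the function name and the example counts are illustrative only and do not reflect the tool or texts used in the study.

    # Minimal sketch of the standard Flesch-Kincaid formulas (illustrative only).
    def flesch_scores(total_words, total_sentences, total_syllables):
        words_per_sentence = total_words / total_sentences
        syllables_per_word = total_syllables / total_words
        # Flesch-Kincaid Reading Ease: higher scores indicate easier text.
        fkre = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
        # Flesch-Kincaid Grade Level: lower scores indicate easier text.
        fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
        return fkre, fkgl

    # Hypothetical passage: 100 words, 8 sentences, 145 syllables.
    fkre, fkgl = flesch_scores(100, 8, 145)
    print(round(fkre, 1), round(fkgl, 1))  # roughly 71.5 and 6.4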
Results: A total of 164 PEMs were evaluated, including 123 generated by LLMs. The original ARS PEMs had a mean FKGL of 10.28, while AI-generated PEMs demonstrated significantly better readability, with a mean FKGL of 8.6 (P < .0001). Among the AI platforms, Gemini produced the most readable text, reaching a mean FKGL of 7.5 and a mean FKRE of 65.5.
Conclusion: LLMs improved the readability of PEMs, potentially enhancing accessibility to medical information for diverse populations. Despite these findings, healthcare providers and patients should cautiously appraise LLM-generated content, particularly for rhinology conditions and procedures.
Level Of Evidence: N/A.
DOI: http://dx.doi.org/10.1177/00034894251342969