Introduction: Barriers exist that prevent individuals from adhering to cardiovascular rehabilitation programs. A key driver of patient adherence is appropriate patient education, and large language models are an emerging educational tool for answering patient questions.
Methods: The primary objective of this study was to evaluate the readability of educational responses provided by large language models to questions regarding cardiac rehabilitation, using Gunning Fog, Flesch-Kincaid, and Flesch Reading Ease scores.
Results: The findings of this study demonstrate that the mean Gunning Fog, Flesch-Kincaid, and Flesch Reading Ease scores do not meet US grade-level reading recommendations across the three models tested: ChatGPT 3.5, Copilot, and Gemini. The Gemini and Copilot models produced more readable responses than ChatGPT 3.5.
Conclusions: Large language models could serve as educational tools for cardiovascular rehabilitation, but their text readability must improve before they can effectively educate patients.
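The three readability formulas named in the Methods are standard and fully specified in the literature. As a minimal sketch of how such scores might be computed, the snippet below implements them with a naive regex-based syllable counter; published tools (e.g. the Python textstat package) use more careful syllable heuristics, so exact values will differ.

```python
import re

def _count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels, then drop
    # most silent trailing "e"s. Real readability tools are more careful.
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return {}
    syllables = sum(_count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if _count_syllables(w) >= 3)

    wps = len(words) / len(sentences)  # mean words per sentence
    spw = syllables / len(words)       # mean syllables per word

    return {
        # Flesch Reading Ease: higher = easier (60-70 is roughly plain English)
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: approximate US school grade required
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        # Gunning Fog index: complex words have three or more syllables
        "gunning_fog": 0.4 * (wps + 100 * complex_words / len(words)),
    }

if __name__ == "__main__":
    sample = "Cardiac rehabilitation improves outcomes. Attend every session."
    print(readability_scores(sample))
```

In a study like this one, each model's response would be passed through such a function and the per-model means compared against the commonly cited US recommendation of roughly a sixth-grade reading level for patient materials.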
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11827661 | PMC |