Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Objectives: Educating pediatric patients and their caregivers about their disease is crucial for improving treatment adherence, recognizing complications early, and alleviating anxiety. AI tools such as ChatGPT and Google Gemini can deliver personalized education, benefiting both patients and providers, and are increasingly used in healthcare. This study compares patient education guides created by ChatGPT and Google Gemini for acute otitis media, pneumonia, and pharyngitis.

Methods: Patient information guides on the three pediatric diseases, generated by ChatGPT and Google Gemini, were compared on word count, sentence count, average words per sentence, average syllables per word, grade level, and ease score. Readability was assessed with the Flesch-Kincaid calculator, similarity with QuillBot, and reliability with the modified DISCERN score. Statistical analysis was performed using R v4.3.2.
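The readability metrics above follow directly from the word, sentence, and syllable counts. As a minimal sketch of how they are derived (the study itself used an online Flesch-Kincaid calculator, not this code), the standard Flesch formulas can be computed in R; the vowel-group syllable heuristic and the sample sentence are illustrative assumptions:

```r
# Minimal sketch of the Flesch Reading Ease and Flesch-Kincaid
# Grade Level formulas. The syllable counter is a naive
# vowel-group heuristic, used here only for illustration.

count_syllables <- function(word) {
  m <- gregexpr("[aeiouy]+", tolower(word))[[1]]
  if (m[1] == -1) 1 else length(m)
}

flesch_scores <- function(text) {
  sentences <- unlist(strsplit(text, "[.!?]+"))
  sentences <- sentences[nzchar(trimws(sentences))]
  words <- unlist(strsplit(text, "[^A-Za-z']+"))
  words <- words[nzchar(words)]
  syllables <- sum(sapply(words, count_syllables))

  wps <- length(words) / length(sentences)  # avg words per sentence
  spw <- syllables / length(words)          # avg syllables per word

  c(reading_ease = 206.835 - 1.015 * wps - 84.6 * spw,
    grade_level  = 0.39 * wps + 11.8 * spw - 15.59)
}

# Illustrative input, not taken from the study's guides:
flesch_scores("Ear infections are common in children. See your doctor if the pain lasts more than two days.")
```

Higher reading-ease scores and lower grade levels both indicate easier text, which is why the two metrics reported in the Results move in opposite directions for the same tool.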

Results: The two tools' responses were compared statistically. No significant difference was found in word count (ChatGPT: 477.3; Google Gemini: 394.0; p=0.0765) or sentence count (ChatGPT: 35.33; Google Gemini: 46.33; p=0.184). Google Gemini scored higher on reading ease (57.10 vs. ChatGPT's 37.79) and lower on grade level (7.43 vs. 11.40), but these differences were not statistically significant (p>0.05), indicating no clear superiority of either tool.
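The abstract reports only group means and p-values and does not name the statistical test. A plausible reconstruction (an assumption on my part) is a two-sample comparison per metric across the three disease guides, e.g. in R v4.3.2; the values below are illustrative placeholders, not the study's data:

```r
# Hypothetical reconstruction of the per-metric comparison.
# The test choice (Welch two-sample t-test) is assumed; only the
# group means (477.3 vs 394.0) and p-value (0.0765) appear in the
# abstract. These word counts are made-up placeholders.
scores_by_tool <- data.frame(
  tool  = rep(c("ChatGPT", "Gemini"), each = 3),
  words = c(502, 455, 475, 380, 410, 392)
)

# One test per metric; Welch's variant avoids assuming equal variances.
t.test(words ~ tool, data = scores_by_tool)
```

With only three guides per tool, such a test has very low power, which is consistent with the large observed gaps (e.g. 37.79 vs. 57.10 in reading ease) failing to reach significance.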

Conclusions For Practice: Across the patient education guides created by both tools for acute otitis media, pneumonia, and pharyngitis, no statistically significant differences emerged to establish the superiority of either AI tool. Further studies should comprehensively evaluate a wider range of AI tools across a broader set of diseases, and should assess whether AI tools can provide real-time, verifiable content reflecting the latest medical advances.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12377924
DOI: http://dx.doi.org/10.7759/cureus.88824

Publication Analysis

Top Keywords

google gemini (36), chatgpt google (12), google (9), gemini (9), patient guides (8), chatgpt (8), patient education (8), education guides (8), guides created (8), acute otitis (8)

Similar Publications

The study assesses the performance of AI models in evaluating postmenopausal osteoporosis. We found that ChatGPT-4o produced the most appropriate responses, highlighting the potential of AI to enhance clinical decision-making and improve patient care in osteoporosis management.

Purpose: The rise of artificial intelligence (AI) offers the potential for assisting clinical decisions.


Evaluating anti-LGBTQIA+ medical bias in large language models. PLOS Digit Health, September 2025. Department of Dermatology, Stanford University, Stanford, California, United States of America.

Large Language Models (LLMs) are increasingly deployed in clinical settings for tasks ranging from patient communication to decision support. While these models demonstrate race-based and binary gender biases, anti-LGBTQIA+ bias remains understudied despite documented healthcare disparities affecting these populations. In this work, we evaluated the potential of LLMs to propagate anti-LGBTQIA+ medical bias and misinformation.


Background: Drug-drug interactions (DDIs) are a critical clinical concern, especially when administering multiple medications, including antidotes. Despite their lifesaving potential, antidotes may interact harmfully with other drugs. However, few studies have specifically investigated DDIs involving antidotes.


Background and aim: Orthodontic treatment planning is a complex process requiring a detailed understanding of dental, skeletal, and soft tissue relationships. Traditionally, treatment decisions are made through clinical expertise and evidence-based guidelines. However, the recent evolution of AI, particularly large language models (LLMs), has warranted an evaluation of their capabilities in streamlining clinical workflows.


Purpose: This study evaluates the performance of ChatGPT and Google Gemini in addressing refractive surgery-related patient questions by analysing the accuracy, completeness, and readability of their responses. Methods: A total of 40 refractive surgery-related questions were compiled and categorized into three levels of difficulty: easy, medium, and hard. Responses from ChatGPT and Google Gemini were blinded and evaluated by two experienced ophthalmologists using standardized criteria.
