To assess the accuracy of answers provided by ChatGPT-3 when prompted with questions from the daily routine of radiologists, and to evaluate the text response when ChatGPT-3 was prompted to provide references for a given answer. ChatGPT-3 (OpenAI, San Francisco) is an artificial intelligence chatbot based on a large language model (LLM) designed to generate human-like text. A total of 88 questions were submitted to ChatGPT-3 as textual prompts, dispersed equally across 8 subspecialty areas of radiology. The responses provided by ChatGPT-3 were assessed for correctness by cross-checking them against peer-reviewed, PubMed-listed references. In addition, the references provided by ChatGPT-3 were evaluated for authenticity. A total of 59 of 88 responses (67%) to radiological questions were correct, while 29 responses (33%) contained errors. Of the 343 references provided, only 124 (36.2%) could be found through an internet search, while 219 (63.8%) appeared to have been generated by ChatGPT-3. Of the 124 identified references, only 47 (37.9%) provided enough background to correctly answer 24 questions (37.5%). In this pilot study, ChatGPT-3 provided correct responses to questions from the daily clinical routine of radiologists in only about two thirds of cases, while the remaining responses contained errors. The majority of the provided references could not be found, and only a minority of them contained the correct information needed to answer the question. Caution is advised when using ChatGPT-3 to retrieve radiological information.
DOI: http://dx.doi.org/10.1177/08465371231171125
J Prosthet Dent
September 2025
Professor, Department of Prosthodontics, Faculty of Dentistry, Gazi University, Ankara, Turkey.
Statement Of Problem: Despite advances in artificial intelligence (AI), the quality, reliability, and understandability of health-related information provided by chatbots remain uncertain. Furthermore, studies on maxillofacial prosthesis (MP) information from AI chatbots are lacking.
Purpose: The purpose of this study was to assess and compare the reliability, quality, readability, and similarity of responses to MP-related questions generated by 4 different chatbots.
Arch Osteoporos
September 2025
Department of Family Medicine, Chang-Gung Memorial Hospital, Linkou Branch, Taoyuan City, Taiwan.
Unlabelled: The study assesses the performance of AI models in evaluating postmenopausal osteoporosis. We found that ChatGPT-4o produced the most appropriate responses, highlighting the potential of AI to enhance clinical decision-making and improve patient care in osteoporosis management.
Purpose: The rise of artificial intelligence (AI) offers the potential for assisting clinical decisions.
Future Sci OA
December 2025
Division of Hematology, University of Miami Sylvester Comprehensive Cancer Center, Miami, FL, USA.
Many patients seek accurate, understandable information about their disease and treatment, turning to the internet or messaging providers. This study aims to validate chatbots' ability to deliver accurate information, contributing to the literature on AI's role in cancer care and helping to improve these tools for patients and caregivers. A set of questions about hematologic malignancies was created with input from oncologists and reputable websites and then submitted to ChatGPT 3.
World J Transplant
September 2025
Center for Research and Innovation in Solid Organ Transplantation, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece.
Background: Kidney and liver transplantation are two sub-specialized medical disciplines, with transplant professionals spending decades in training. While artificial intelligence-based (AI-based) tools could potentially assist in everyday clinical practice, comparative assessment of their effectiveness in clinical decision-making remains limited.
Aim: To compare the use of ChatGPT and GPT-4 as potential tools in AI-assisted clinical practice in these challenging disciplines.
Behav Sci (Basel)
August 2025
Department of English, College of Language Sciences, King Saud University, P.O. Box 2460, Riyadh 11451, Saudi Arabia.
Large language models (LLMs) have become extensively used across diverse settings. Yet, given the complex nature of these large-scale artificial intelligence (AI) systems, how to leverage their capabilities effectively remains underexplored. In this study, we looked at the types of communication errors that occur in interactions between humans and ChatGPT-3.