Accuracy of Information and References Using ChatGPT-3 for Retrieval of Clinical Radiological Information.

Can Assoc Radiol J

Department of Diagnostic Imaging, Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada.

Published: February 2024


Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Purpose: To assess the accuracy of answers provided by ChatGPT-3 when prompted with questions from the daily routine of radiologists, and to evaluate the text response when ChatGPT-3 was prompted to provide references for a given answer.

Methods: ChatGPT-3 (OpenAI, San Francisco) is an artificial intelligence chatbot based on a large language model (LLM) designed to generate human-like text. A total of 88 questions were submitted to ChatGPT-3 using textual prompts, distributed equally across 8 subspecialty areas of radiology. The responses provided by ChatGPT-3 were assessed for correctness by cross-checking them against peer-reviewed, PubMed-listed references. In addition, the references provided by ChatGPT-3 were evaluated for authenticity.

Results: A total of 59 of 88 responses (67%) to radiological questions were correct, while 29 responses (33%) contained errors. Of the 343 references provided, only 124 (36.2%) could be found through an internet search, while 219 (63.8%) appeared to have been generated by ChatGPT-3 itself. Of the 124 identified references, only 47 (37.9%) provided enough background to correctly answer 24 questions (37.5%).

Conclusion: In this pilot study, ChatGPT-3 provided correct responses to questions from the daily clinical routine of radiologists in only about two thirds of cases, while the remainder contained errors. The majority of the provided references could not be found, and only a minority contained the correct information needed to answer the question. Caution is advised when using ChatGPT-3 to retrieve radiological information.
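As a rough illustration of the two-step protocol the abstract describes (submitting textual prompts, then testing whether the cited references actually exist), the Python sketch below uses the OpenAI API and the CrossRef REST API. This is not the authors' code: the study queried the ChatGPT web interface, and the model name, question text, and CrossRef-based authenticity check here are all illustrative assumptions.

    import requests
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(question: str) -> str:
        """Submit one textual prompt and return the model's reply."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # stand-in; the study used the ChatGPT web interface
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    def reference_exists(citation: str) -> bool:
        """Ask CrossRef whether any indexed work matches a citation string.
        CrossRef returns best-effort matches for almost any query, so a real
        authenticity check would also compare returned titles and authors."""
        reply = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": citation, "rows": 1},
            timeout=30,
        )
        return bool(reply.json()["message"]["items"])

    # Example run with a hypothetical question in the style of the study.
    answer = ask(
        "Which MRI sequences are most sensitive for acute ischemic stroke? "
        "Please cite peer-reviewed references."
    )
    print(answer)

A fuller reproduction would parse each citation out of the model's answer and verify it against the matched CrossRef record rather than accepting any hit.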


Source
http://dx.doi.org/10.1177/08465371231171125

Publication Analysis

Top Keywords

provided chatgpt-3: 12
chatgpt-3: 10
references: 10
chatgpt-3 prompted: 8
questions daily: 8
routine radiologists: 8
references provided: 8
correct responses: 8
provided references: 8
provided: 7

Similar Publications

Statement Of Problem: Despite advances in artificial intelligence (AI), the quality, reliability, and understandability of health-related information provided by chatbots remain an open question. Furthermore, studies on maxillofacial prosthesis (MP) information from AI chatbots are lacking.

Purpose: The purpose of this study was to assess and compare the reliability, quality, readability, and similarity of responses to MP-related questions generated by 4 different chatbots.


The study assesses the performance of AI models in evaluating postmenopausal osteoporosis. We found that ChatGPT-4o produced the most appropriate responses, highlighting the potential of AI to enhance clinical decision-making and improve patient care in osteoporosis management.

Purpose: The rise of artificial intelligence (AI) offers the potential for assisting clinical decisions.


Many patients seek accurate, understandable information about their disease and treatment, turning to the internet or messaging their providers. This study aims to validate chatbots' ability to deliver accurate information, contributing to the literature on AI's role in cancer care and helping to improve these tools for patients and caregivers. A set of questions about hematologic malignancies was created with input from oncologists and reputable websites and then submitted to ChatGPT 3.


Background: Kidney and liver transplantation are two sub-specialized medical disciplines, with transplant professionals spending decades in training. While artificial intelligence-based (AI-based) tools could potentially assist in everyday clinical practice, comparative assessment of their effectiveness in clinical decision-making remains limited.

Aim: To compare the use of ChatGPT and GPT-4 as potential tools in AI-assisted clinical practice in these challenging disciplines.


Large language models (LLMs) have become widely used across diverse settings. Yet, given the complexity of these large-scale artificial intelligence (AI) systems, how to leverage their capabilities effectively remains underexplored. In this study, we looked at the types of communication errors that occur in interactions between humans and ChatGPT-3.
