Potential of GPT-4 for Detecting Errors in Radiology Reports: Implications for Reporting Accuracy.

Radiology

From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62

Published: April 2024



Article Abstract

Background: Errors in radiology reports may occur because of resident-to-attending discrepancies, speech recognition inaccuracies, and large workload. Large language models, such as GPT-4 (ChatGPT; OpenAI), may assist in generating reports.

Purpose: To assess the effectiveness of GPT-4 in identifying common errors in radiology reports, focusing on performance, time, and cost-efficiency.

Materials and Methods: In this retrospective study, 200 radiology reports (radiography and cross-sectional imaging [CT and MRI]) were compiled between June 2023 and December 2023 at one institution. A total of 150 errors from five common error categories (omission, insertion, spelling, side confusion, and other) were intentionally inserted into 100 of the reports and used as the reference standard. Six radiologists (two senior radiologists, two attending physicians, and two residents) and GPT-4 were tasked with detecting these errors. Overall error detection performance, error detection in the five error categories, and reading time were assessed using Wald χ² tests and paired-sample t tests.

Results: GPT-4 (detection rate, 82.7%; 124 of 150; 95% CI: 75.8, 87.9) matched the average detection performance of radiologists independent of their experience (senior radiologists, 89.3% [134 of 150; 95% CI: 83.4, 93.3]; attending physicians, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; residents, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; P value range, .522 to .99). One senior radiologist outperformed GPT-4 (detection rate, 94.7%; 142 of 150; 95% CI: 89.8, 97.3; P = .006). GPT-4 required less processing time per radiology report than the fastest human reader in the study (mean reading time, 3.5 seconds ± 0.5 [SD] vs 25.1 seconds ± 20.1, respectively; P < .001; Cohen d = -1.08). The use of GPT-4 resulted in lower mean correction cost per report than the most cost-efficient radiologist ($0.03 ± 0.01 vs $0.42 ± 0.41; P < .001; Cohen d = -1.12).

Conclusion: The radiology report error detection rate of GPT-4 was comparable with that of radiologists, potentially reducing work hours and cost.

© RSNA, 2024. See also the editorial by Forman in this issue.
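The confidence intervals reported in the abstract are consistent with Wilson score intervals for a binomial proportion (an assumption; the abstract does not name the interval method, but the reported bounds reproduce exactly under it). A minimal sketch that recovers the GPT-4 detection-rate CI of 124 of 150:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion.

    z = 1.96 corresponds to a 95% interval.
    """
    p = successes / n
    center = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    denom = 1 + z**2 / n
    return (center - margin) / denom, (center + margin) / denom

# GPT-4 detected 124 of 150 inserted errors
lo, hi = wilson_ci(124, 150)
print(f"{124/150:.1%} (95% CI: {lo:.1%}, {hi:.1%})")  # 82.7% (95% CI: 75.8%, 87.9%)
```

The same function reproduces the senior radiologists' interval (134 of 150 gives 83.4%, 93.3%), matching the abstract's figures.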

Source
http://dx.doi.org/10.1148/radiol.232714

