Background: The COVID-19 pandemic has been accompanied by an "infodemic," where the rapid spread of misinformation has exacerbated public health challenges. Traditional fact-checking methods, though effective, are time-consuming and resource-intensive, limiting their ability to combat misinformation at scale. Large language models (LLMs) such as GPT-4 offer a more scalable solution, but their susceptibility to generating hallucinations (plausible yet incorrect information) compromises their reliability.
Objective: This study aims to enhance the accuracy and reliability of COVID-19 fact-checking by integrating a retrieval-augmented generation (RAG) system with LLMs, specifically addressing the limitations of hallucination and context inaccuracy inherent in stand-alone LLMs.
Methods: We constructed a context dataset comprising approximately 130,000 peer-reviewed papers related to COVID-19 from PubMed and Scopus. This dataset was integrated with GPT-4 to develop multiple RAG-enhanced models: the naïve RAG, Lord of the Retrievers (LOTR)-RAG, corrective RAG (CRAG), and self-RAG (SRAG). The RAG systems were designed to retrieve relevant external information, which was then embedded and indexed in a vector store for similarity searches. One real-world dataset and one synthesized dataset, each containing 500 claims, were used to evaluate the performance of these models. Each model's accuracy, F-score, precision, and sensitivity were compared to assess their effectiveness in reducing hallucination and improving fact-checking accuracy.
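The retrieval pipeline described in the Methods (embedding external documents, indexing them in a vector store, and running similarity searches to ground the LLM's verdict) can be sketched as follows. This is a toy illustration, not the authors' implementation: the bag-of-words "embedding," the in-memory store, and the prompt template are all assumptions standing in for a real encoder model and vector database.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the study embeds papers with a real
    # encoder model, so this function is only illustrative.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory stand-in for a vector database."""

    def __init__(self):
        self.docs = []  # list of (embedding, text) pairs

    def add(self, text):
        self.docs.append((embed(text), text))

    def similarity_search(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def rag_prompt(claim, store):
    # Retrieved evidence is prepended to the claim so the LLM grounds
    # its verdict in the corpus rather than in parametric memory alone.
    context = "\n".join(store.similarity_search(claim))
    return f"Context:\n{context}\n\nClaim: {claim}\nVerdict (true/false) with cited evidence:"

store = VectorStore()
store.add("Randomized trials show COVID-19 vaccines reduce severe outcomes.")
store.add("Face masks lower transmission of respiratory droplets.")
store.add("There is no evidence that vitamin C cures COVID-19.")
```

The agentic variants (CRAG, SRAG) add a grading or self-reflection step on top of this loop, re-querying when the retrieved passages score poorly against the claim.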
Results: The baseline GPT-4 model achieved an accuracy of 0.856 on the real-world dataset. The naïve RAG model improved this to 0.946, while the LOTR-RAG model further increased accuracy to 0.951. The CRAG and SRAG models outperformed all others, achieving accuracies of 0.972 and 0.973, respectively. The baseline GPT-4 model reached an accuracy of 0.960 on the synthesized dataset. The naïve RAG model increased this to 0.972, and the LOTR-RAG, CRAG, and SRAG models achieved an accuracy of 0.978. These findings demonstrate that the RAG-enhanced models consistently maintained high accuracy levels, closely mirroring ground-truth labels and significantly reducing hallucinations. The CRAG and SRAG models also provided more detailed and contextually accurate explanations, further establishing the superiority of agentic RAG frameworks in delivering reliable and precise fact-checking outputs across diverse datasets.
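The accuracy, precision, sensitivity, and F-score reported above are the standard binary-classification metrics, computed against the ground-truth claim labels. A minimal computation over hypothetical labels (the 0/1 vectors below are illustrative, not the study's data):

```python
def fact_check_metrics(y_true, y_pred):
    # Treat "claim verified as true" as the positive class; labels are 0/1.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # also called recall
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, f_score
```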
Conclusions: The integration of RAG systems with LLMs substantially improves the accuracy and contextual relevance of automated fact-checking. By reducing hallucinations and enhancing transparency through citation of retrieved sources, this method holds significant promise for rapid, reliable information verification to combat misinformation during public health crises.
Download full-text PDF:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12079058
DOI: http://dx.doi.org/10.2196/66098
Contact Dermatitis
September 2025
Department of Chemical Engineering, School of Engineering, Monash University Malaysia, Bandar Sunway, Selangor Darul Ehsan, Malaysia.
Extended glove usage is crucial in various occupational settings to safeguard workers and maintain hygiene standards. However, prolonged wear creates an occlusive environment that disrupts normal skin evaporation, leading to temporary overhydration. This reversal of the diffusion gradient facilitates the penetration of residual soaps and alcohol from hand hygiene practices, which can deplete skin moisture and cause irritation.
Proc IEEE Int Conf Big Data
December 2024
Dept. of Computer Science and Engineering, Mississippi State University Potentia Analytics Inc.; Dave C. Swalm School of Chemical Engineering, Mississippi State University.
This paper presents ClinicSum, a novel framework designed to automatically generate clinical summaries from patient-doctor conversations. It uses a two-module architecture: a retrieval-based filtering module that extracts Subjective, Objective, Assessment, and Plan (SOAP) information from conversation transcripts, and an inference module powered by fine-tuned Pre-trained Language Models (PLMs), which leverage the extracted SOAP data to generate abstractive clinical summaries. To fine-tune the PLMs, we created a training dataset consisting of 1,473 conversation-summary pairs by consolidating two publicly available datasets, FigShare and MTS-Dialog, with ground-truth summaries validated by Subject Matter Experts (SMEs).
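The two-module design above can be sketched as a filter step followed by a generation step. Everything here is a stand-in: the keyword cues are invented for illustration, and the final formatting function replaces the fine-tuned PLM that ClinicSum actually uses for abstractive summarization.

```python
SOAP_CUES = {
    # Hypothetical trigger phrases per SOAP section, for illustration only.
    "Subjective": ("i feel", "i have", "my pain"),
    "Objective": ("blood pressure", "temperature", "exam"),
    "Assessment": ("diagnosis", "likely", "consistent with"),
    "Plan": ("prescribe", "follow up", "refer"),
}

def filter_soap(transcript_lines):
    # Module 1 (illustrative): route each utterance to the SOAP sections
    # whose cue phrases it contains; the real retriever is more sophisticated.
    soap = {section: [] for section in SOAP_CUES}
    for line in transcript_lines:
        low = line.lower()
        for section, cues in SOAP_CUES.items():
            if any(cue in low for cue in cues):
                soap[section].append(line)
    return soap

def summarize(soap):
    # Module 2 stand-in: a fine-tuned PLM would generate the abstractive
    # summary; here we simply format the extracted SOAP content.
    return "\n".join(
        f"{section}: {' '.join(lines)}" for section, lines in soap.items() if lines
    )
```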
JCO Clin Cancer Inform
September 2025
Department of Applied AI and Data Science, City of Hope, Duarte, CA.
Purpose: The recent advancements of retrieval-augmented generation (RAG) and large language models (LLMs) have revolutionized the extraction of real-world evidence from unstructured electronic health records (EHRs) in oncology. This study aims to enhance RAG's effectiveness by implementing a retriever encoder specifically designed for oncology EHRs, with the goal of improving the precision and relevance of retrieved clinical notes for oncology-related queries.
Methods: Our model was pretrained with more than six million oncology notes from 209,135 patients at City of Hope.
Euro Surveill
September 2025
Crisis Preparedness and Response, Sciensano, Brussels, Belgium.
Following the experience gained during the COVID-19 pandemic, the Belgian Risk Assessment Group (RAG) developed the Respi-Radar in the summer of 2023 to assess the epidemiological situation of respiratory infections and inform public health preparedness and response in Belgium. The Respi-Radar consists of four risk levels (green, yellow, orange and red), which indicate the extent of viral circulation and/or pressure on the healthcare system. Based on these risk levels, authorities can apply adequate measures depending on the epidemiological trends.
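The four-level scheme described above amounts to mapping epidemiological indicators onto a color-coded risk scale. A minimal sketch of such a mapping follows; the specific indicators and thresholds here are invented for illustration, since the actual Respi-Radar criteria combine several indicators with expert assessment by the RAG.

```python
def respi_radar_level(incidence_per_100k, hospital_occupancy_pct):
    # Illustrative thresholds only, not the Belgian RAG's actual criteria:
    # both viral circulation and healthcare pressure must stay below a
    # level's ceiling for that level to apply.
    if incidence_per_100k < 50 and hospital_occupancy_pct < 10:
        return "green"
    if incidence_per_100k < 150 and hospital_occupancy_pct < 25:
        return "yellow"
    if incidence_per_100k < 300 and hospital_occupancy_pct < 50:
        return "orange"
    return "red"
```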
Biomedical named entity recognition (NER) is a high-utility natural language processing (NLP) task, and large language models (LLMs) show promise particularly in few-shot settings (i.e., limited training data).
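Few-shot NER with an LLM typically means placing a handful of labeled examples in the prompt before the target sentence. A sketch of such a prompt builder follows; the entity types and example sentences are illustrative assumptions, not drawn from the cited study.

```python
# Hypothetical in-context examples: (sentence, [(entity, type), ...]).
FEW_SHOT_EXAMPLES = [
    ("Aspirin reduced fever in the patient.",
     [("Aspirin", "Drug"), ("fever", "Symptom")]),
    ("Metformin is used to treat type 2 diabetes.",
     [("Metformin", "Drug"), ("type 2 diabetes", "Disease")]),
]

def build_ner_prompt(sentence):
    # Assemble a few-shot prompt: instruction, labeled demonstrations,
    # then the unlabeled target sentence for the LLM to annotate.
    parts = ["Extract biomedical entities (Drug, Disease, Symptom)."]
    for text, entities in FEW_SHOT_EXAMPLES:
        tagged = "; ".join(f"{e} [{t}]" for e, t in entities)
        parts.append(f"Text: {text}\nEntities: {tagged}")
    parts.append(f"Text: {sentence}\nEntities:")
    return "\n\n".join(parts)
```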