Recent advancements in large language models (LLMs) have significantly enhanced text generation across various sectors; however, their medical application faces critical challenges regarding both accuracy and real-time responsiveness. To address these dual challenges, we propose a novel retrieval-augmented generation (RAG) framework with two-step retrieval and ranking that combines embedding search with Elasticsearch. Built upon a dynamically updated medical knowledge base incorporating expert-reviewed documents from leading healthcare institutions, our hybrid architecture employs ColBERTv2 for context-aware result ranking while maintaining computational efficiency. Experimental results show a 10% improvement in accuracy for complex medical queries compared to standalone LLM and single-search RAG variants. In our experimental setting, latency remains a challenge for emergency scenarios requiring sub-second responses, although real-time performance should be achievable with more powerful hardware in real-world deployments. This work establishes a new paradigm for reliable medical AI assistants that balances accuracy with practical deployment considerations.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12103550 | PMC
http://dx.doi.org/10.1038/s41598-025-00724-w | DOI Listing
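The abstract above names a concrete pipeline: keyword search plus embedding search, followed by a learned re-ranker. Purely as a hedged illustration, the Python sketch below wires up that two-step retrieve-then-rerank pattern over a toy in-memory corpus; rank_bm25 stands in for Elasticsearch and a sentence-transformers cross-encoder stands in for ColBERTv2, so none of the model names, documents, or scores reflect the authors' actual system.

```python
# Sketch of a two-step "retrieve then re-rank" hybrid RAG retriever.
# Assumptions: rank_bm25 replaces Elasticsearch, a cross-encoder replaces ColBERTv2.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, CrossEncoder, util

corpus = [
    "Metformin is a first-line therapy for type 2 diabetes.",
    "Aspirin is used for secondary prevention of myocardial infarction.",
    "Optic disc swelling may indicate elevated intracranial pressure.",
]

# Step 1a: keyword retrieval (BM25 here; Elasticsearch in the described system).
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

# Step 1b: dense embedding retrieval.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = embedder.encode(corpus, convert_to_tensor=True)

# Step 2: re-ranking of the merged candidate pool.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def retrieve(query: str, k: int = 3) -> list[str]:
    # Keyword scores and dense cosine scores over the whole toy corpus.
    kw_scores = bm25.get_scores(query.lower().split())
    dense_scores = util.cos_sim(embedder.encode(query, convert_to_tensor=True), corpus_emb)[0]

    # Union of the top-k candidates from each retriever.
    top_kw = sorted(range(len(corpus)), key=lambda i: kw_scores[i], reverse=True)[:k]
    top_dense = sorted(range(len(corpus)), key=lambda i: float(dense_scores[i]), reverse=True)[:k]
    candidates = sorted(set(top_kw) | set(top_dense))

    # Re-rank the merged candidates and return the best passages.
    pairs = [(query, corpus[i]) for i in candidates]
    rerank_scores = reranker.predict(pairs)
    ranked = sorted(zip(candidates, rerank_scores), key=lambda t: t[1], reverse=True)
    return [corpus[i] for i, _ in ranked[:k]]

print(retrieve("first-line drug for type 2 diabetes"))
```

In a production deployment the candidate pools would come from an Elasticsearch index and a vector store rather than in-memory lists, and the re-ranker would score fine-grained query-document token interactions as ColBERTv2 does.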
Front Digit Health
August 2025
Department of Ophthalmology, Stanford University, Palo Alto, CA, United States.
Introduction: Vision language models (VLMs) combine image analysis capabilities with large language models (LLMs). Because of their multimodal capabilities, VLMs offer a clinical advantage over image classification models for the diagnosis of optic disc swelling by allowing consideration of clinical context. In this study, we compare the performance of non-specialty-trained VLMs with different prompts in the classification of optic disc swelling on fundus photographs.
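As a hedged sketch of how such a prompt comparison might be scripted (not the authors' protocol), the snippet below sends a fundus photograph to a general-purpose vision language model under two prompt variants, one with and one without clinical context; the model name, prompts, and file path are illustrative assumptions.

```python
# Illustrative sketch: prompting a general-purpose VLM to classify a fundus photo.
# Assumptions: OpenAI-hosted model "gpt-4o", local file "fundus.jpg", example prompts.
import base64
from openai import OpenAI

client = OpenAI()

def classify_fundus(image_path: str, prompt: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed general-purpose VLM, not the study's models
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Two example prompt variants: without and with clinical context.
prompts = [
    "Is the optic disc in this fundus photograph swollen? Answer 'swollen' or 'normal'.",
    "A patient presents with headache and transient visual obscurations. "
    "Is the optic disc in this fundus photograph swollen? Answer 'swollen' or 'normal'.",
]
for p in prompts:
    print(classify_fundus("fundus.jpg", p))
```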
J Am Coll Emerg Physicians Open
October 2025
Department of Emergency Medicine, University of Michigan, Ann Arbor, Michigan, USA.
Objectives: We assessed time to provider (TTP) for patients with a non-English language preference (NELP) compared to patients with an English language preference (ELP) in the emergency department (ED).
Methods: We conducted a retrospective cohort study of adults presenting between 2019 and 2023 to a large urban ED. We used a 2-step classification that first identified NELP from patients' reported language at registration, followed by identification in the narrative text of the triage note.
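The two-step classification is straightforward to sketch in code. The following is a simplified illustration under assumed field names and keyword patterns, not the study's actual implementation.

```python
# Simplified two-step NELP identification: registration language field first,
# then a keyword scan of the free-text triage note. Patterns are assumptions.
import re

NON_ENGLISH_REGISTRATION = {"spanish", "arabic", "mandarin", "other"}
TRIAGE_PATTERNS = re.compile(
    r"\b(interpreter|spanish[- ]speaking|language barrier)\b",
    re.IGNORECASE,
)

def has_nelp(registration_language: str, triage_note: str) -> bool:
    """Return True if the patient appears to have a non-English language preference."""
    # Step 1: structured registration field.
    if registration_language.strip().lower() in NON_ENGLISH_REGISTRATION:
        return True
    # Step 2: narrative text of the triage note.
    return bool(TRIAGE_PATTERNS.search(triage_note))

print(has_nelp("English", "Evaluated via phone interpreter for abdominal pain."))  # True
print(has_nelp("English", "Reports 2 days of cough, no fever."))                   # False
```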
Proc Mach Learn Res
November 2024
Pretraining plays a pivotal role in acquiring generalized knowledge from large-scale data, achieving remarkable successes as evidenced by large models in CV and NLP. However, progress in the graph domain remains limited due to fundamental challenges represented by feature heterogeneity and structural heterogeneity. Recent efforts have been made to address feature heterogeneity via Large Language Models (LLMs) on text-attributed graphs (TAGs) by generating fixed-length text representations as node features.
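As a minimal, hedged sketch of that last idea (fixed-length text representations used as node features), the snippet below encodes toy node texts with a sentence-transformer standing in for an LLM encoder and attaches the vectors to a PyTorch Geometric graph; the texts, edges, and encoder choice are illustrative assumptions, not the cited work's pipeline.

```python
# Toy text-attributed graph: encode each node's text into a fixed-length vector
# and use the vectors as the node feature matrix x. Encoder is a stand-in for an LLM.
import torch
from sentence_transformers import SentenceTransformer
from torch_geometric.data import Data

node_texts = [
    "Paper on retrieval-augmented generation for clinical question answering.",
    "Paper on vision language models for fundus image interpretation.",
    "Paper on pretraining graph neural networks.",
]
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]], dtype=torch.long)  # toy citation edges

encoder = SentenceTransformer("all-MiniLM-L6-v2")
x = torch.tensor(encoder.encode(node_texts))  # shape: [num_nodes, embedding_dim]

graph = Data(x=x, edge_index=edge_index)
print(graph)  # Data(x=[3, 384], edge_index=[2, 3])
```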
Rev Cardiovasc Med
August 2025
Cardiovascular Surgery Department, Ankara Bilkent City Hospital, 06800 Ankara, Turkey.
Background: This study aimed to investigate the performance of two versions of ChatGPT (o1 and 4o) in making decisions about coronary revascularization and to compare the recommendations of these versions with those of a multidisciplinary Heart Team. Moreover, the study aimed to assess whether the decisions generated by ChatGPT, based on the internal knowledge base of the system and clinical guidelines, align with expert recommendations in real-world coronary artery disease management. Given the increasing prevalence and processing capabilities of large language models, such as ChatGPT, this comparison offers insights into the potential applicability of these systems in complex clinical decision-making.
Front Big Data
August 2025
MaiNLP, Center for Information and Language Processing, LMU Munich, Munich, Germany.
Predicting career trajectories is a complex yet impactful task, offering significant benefits for personalized career counseling, recruitment optimization, and workforce planning. However, effective career path prediction (CPP) modeling faces challenges including highly variable career trajectories, free-text resume data, and limited publicly available benchmark datasets. In this study, we present a comprehensive comparative evaluation of CPP models, including linear projection, multilayer perceptron (MLP), LSTM, and large language models (LLMs), across multiple input settings and two recently introduced public datasets.
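To make one of the baseline families concrete, here is a minimal, purely illustrative PyTorch sketch of an MLP that maps a fixed-length resume embedding to a next-job-title class; the dimensions, class count, and random inputs are assumptions and do not reflect the benchmark's setup.

```python
# Illustrative MLP baseline for career path prediction: resume embedding -> next title.
import torch
import torch.nn as nn

class CareerMLP(nn.Module):
    def __init__(self, embed_dim: int = 384, hidden_dim: int = 256, num_titles: int = 50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_titles),  # logits over candidate next job titles
        )

    def forward(self, resume_embedding: torch.Tensor) -> torch.Tensor:
        return self.net(resume_embedding)

model = CareerMLP()
fake_batch = torch.randn(8, 384)  # 8 resumes encoded as assumed 384-d vectors
logits = model(fake_batch)        # shape: [8, 50]
print(logits.argmax(dim=-1))      # predicted next-title indices
```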