The rapid growth of biomedical literature poses challenges for manual knowledge curation and synthesis. Biomedical Natural Language Processing (BioNLP) automates this process. While Large Language Models (LLMs) have shown promise in general domains, their effectiveness on BioNLP tasks remains unclear due to limited benchmarks and practical guidelines. We perform a systematic evaluation of four LLMs (GPT and LLaMA representatives) on 12 BioNLP benchmarks across six applications. We compare their zero-shot, few-shot, and fine-tuning performance with the traditional fine-tuning of BERT or BART models. We examine inconsistencies, missing information, and hallucinations, and perform a cost analysis. Here, we show that traditional fine-tuning outperforms zero- or few-shot LLMs in most tasks. However, closed-source LLMs like GPT-4 excel in reasoning-related tasks such as medical question answering. Open-source LLMs still require fine-tuning to close performance gaps. We find issues like missing information and hallucinations in LLM outputs. These results offer practical insights for applying LLMs in BioNLP.
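The zero-shot vs. few-shot comparison described in the abstract boils down to whether demonstration examples are included in the prompt before the query. A minimal sketch of this setup, using a hypothetical hallmark-classification task (the instruction text, demonstration pair, and query sentence are illustrative, not from the paper):

```python
def build_prompt(task_instruction, examples, query):
    """Build a zero-shot (examples=[]) or few-shot prompt for an LLM.

    examples: list of (input_text, label) demonstration pairs.
    """
    parts = [task_instruction]
    for text, label in examples:
        parts.append(f"Input: {text}\nLabel: {label}")
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

def accuracy(predictions, gold):
    """Exact-match accuracy, as used for classification-style benchmarks."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Hypothetical biomedical document-classification task.
instruction = "Classify the biomedical sentence by cancer hallmark."
demos = [("TP53 mutations drive uncontrolled proliferation.",
          "sustaining proliferative signaling")]
zero_shot = build_prompt(instruction, [], "VEGF promotes angiogenesis.")
few_shot = build_prompt(instruction, demos, "VEGF promotes angiogenesis.")
```

Fine-tuning, by contrast, updates model weights on task data rather than relying on in-context demonstrations, which is why it remains the stronger option for most of the benchmarks evaluated.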
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11972378 | PMC
http://dx.doi.org/10.1038/s41467-025-56989-2 | DOI Listing
PLoS One
September 2025
Centre for Experimental Pathogen Host Research, School of Medicine, University College Dublin, Dublin, Ireland.
Background: Acute viral respiratory infections (AVRIs) rank among the most common causes of hospitalisation worldwide, imposing significant healthcare burdens and driving the development of pharmacological treatments. However, inconsistent outcome reporting across clinical trials limits evidence synthesis and its translation into clinical practice. A core outcome set (COS) for pharmacological treatments in hospitalised adults with AVRIs is essential to standardise trial outcomes and improve research comparability.
IEEE Comput Graph Appl
September 2025
Autonomous agents powered by Large Language Models are transforming AI, creating an imperative for the visualization area. However, our field's focus on a human in the sensemaking loop raises critical questions about autonomy, delegation, and coordination: how can such agentic visualization preserve human agency while amplifying analytical capabilities? This paper addresses these questions by reinterpreting existing visualization systems with semi-automated or fully automatic AI components through an agentic lens.
Drug Saf
September 2025
The MITRE Corporation, 202 Burlington Rd, Bedford, MA, 01730, USA.
Acta Neurochir (Wien)
September 2025
Department of Neurosurgery, Istinye University, Istanbul, Turkey.
Background: Recent studies suggest that large language models (LLMs) such as ChatGPT are useful tools for medical students and residents preparing for examinations. These studies, especially those conducted with multiple-choice questions, conclude that the knowledge level and response consistency of LLMs are generally acceptable; however, further optimization is needed in areas such as case discussion, interpretation, and language proficiency. Therefore, this study aimed to evaluate the performance of six distinct LLMs on Turkish and English neurosurgery multiple-choice questions and to assess their accuracy and consistency in a specialized medical context.
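The two metrics named in this study, accuracy and consistency, can be made concrete with a small sketch. The study's exact consistency definition is not given here, so the version below (per-question agreement with the modal answer across repeated runs, averaged over questions) is an assumption, and the answer data is invented for illustration:

```python
from collections import Counter

def accuracy(answers, key):
    """Fraction of questions answered correctly in a single run."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def consistency(runs):
    """Agreement across repeated runs: for each question, the fraction
    of runs giving that question's most common answer, averaged over
    all questions. 1.0 means every run answered identically."""
    n_runs = len(runs)
    scores = []
    for answers in zip(*runs):  # answers to one question across runs
        modal_count = Counter(answers).most_common(1)[0][1]
        scores.append(modal_count / n_runs)
    return sum(scores) / len(scores)

# Hypothetical data: 3 repeated runs over 4 multiple-choice questions.
key = ["A", "C", "B", "D"]
runs = [["A", "C", "B", "A"],
        ["A", "C", "D", "A"],
        ["A", "B", "B", "A"]]
```

Note that a model can be highly consistent yet inaccurate (runs above always answer "A" on question 4, which is wrong), which is why the two metrics are reported separately.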