Importance: The study highlights the potential of large language models, specifically GPT-3.5 and GPT-4, in processing complex clinical data and extracting meaningful information with minimal training data. By developing and refining prompt-based strategies, we can significantly enhance the models' performance, making them viable tools for clinical NER tasks and possibly reducing the reliance on extensive annotated datasets.
Objectives: This study quantifies the capabilities of GPT-3.5 and GPT-4 for clinical named entity recognition (NER) tasks and proposes task-specific prompts to improve their performance.
Materials and Methods: We evaluated these models on 2 clinical NER tasks: (1) to extract medical problems, treatments, and tests from clinical notes in the MTSamples corpus, following the 2010 i2b2 concept extraction shared task, and (2) to identify nervous system disorder-related adverse events from safety reports in the vaccine adverse event reporting system (VAERS). To improve the GPT models' performance, we developed a clinical task-specific prompt framework that includes (1) baseline prompts with task description and format specification, (2) annotation guideline-based prompts, (3) error analysis-based instructions, and (4) annotated samples for few-shot learning. We assessed each prompt's effectiveness and compared the models to BioClinicalBERT.
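The four prompt components described above are additive: each layer adds context on top of the baseline task description. A minimal sketch of how such a framework could be assembled is shown below; the component wording, the entity output format, and the function name are illustrative assumptions, not the authors' actual prompts.

```python
# Hypothetical sketch of a four-component clinical NER prompt framework.
# All prompt text here is made up for illustration.

def build_prompt(note_text,
                 guideline="",
                 error_instructions="",
                 few_shot_examples=()):
    """Assemble a clinical NER prompt from up to four components."""
    # (1) Baseline: task description and output format specification.
    parts = [
        "Extract medical problems, treatments, and tests from the clinical "
        "note below. Return one entity per line as: <type>|<text>."
    ]
    # (2) Annotation guideline-based prompt.
    if guideline:
        parts.append("Annotation guidelines:\n" + guideline)
    # (3) Error analysis-based instructions.
    if error_instructions:
        parts.append("Common errors to avoid:\n" + error_instructions)
    # (4) Annotated samples for few-shot learning.
    for note, annotations in few_shot_examples:
        parts.append(f"Example note:\n{note}\nEntities:\n{annotations}")
    parts.append("Clinical note:\n" + note_text)
    return "\n\n".join(parts)
```

Because the components are optional keyword arguments, the same function reproduces each ablation condition (baseline only, baseline plus guidelines, and so on) by omitting arguments.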
Results: Using baseline prompts, GPT-3.5 and GPT-4 achieved relaxed F1 scores of 0.634 and 0.804 for MTSamples and 0.301 and 0.593 for VAERS. Additional prompt components consistently improved model performance. When all 4 components were used, GPT-3.5 and GPT-4 achieved relaxed F1 scores of 0.794 and 0.861 for MTSamples and 0.676 and 0.736 for VAERS, demonstrating the effectiveness of our prompt framework. Although these results trail BioClinicalBERT (F1 of 0.901 for the MTSamples dataset and 0.802 for VAERS), they are very promising considering that few training samples are needed.
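The relaxed F1 scores reported above credit inexact boundary matches. One common definition of relaxed matching, sketched below, counts any character-offset overlap between a predicted and a gold span of the same entity type; the study's exact matching criterion may differ from this assumption.

```python
def relaxed_f1(gold, pred):
    """Relaxed (overlap-based) F1 over (start, end, entity_type) spans.

    A prediction counts as correct if it overlaps any gold span of the
    same type; precision and recall are counted from each side so that
    one prediction overlapping two gold spans is not double-credited.
    """
    def overlaps(a, b):
        return a[2] == b[2] and a[0] < b[1] and b[0] < a[1]

    tp_pred = sum(any(overlaps(p, g) for g in gold) for p in pred)
    tp_gold = sum(any(overlaps(g, p) for p in pred) for g in gold)
    precision = tp_pred / len(pred) if pred else 0.0
    recall = tp_gold / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Under exact matching the same prediction with a slightly shifted boundary would score zero, which is why relaxed scores are typically higher and are often reported for generative models whose output spans rarely align to token boundaries.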
Discussion: The study's findings suggest a promising direction for leveraging LLMs in clinical NER tasks. However, while the performance of GPT models improved with task-specific prompts, there is a need for further development and refinement. LLMs like GPT-4 show potential to approach the performance of state-of-the-art models like BioClinicalBERT, but they still require careful prompt engineering and task-specific knowledge. The study also underscores the importance of evaluation schemas that accurately reflect the capabilities and performance of LLMs in clinical settings.
Conclusion: While direct application of GPT models to clinical NER tasks falls short of optimal performance, our task-specific prompt framework, incorporating medical knowledge and training samples, significantly enhances GPT models' feasibility for potential clinical applications.
Download full-text PDF:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11339492
DOI: http://dx.doi.org/10.1093/jamia/ocad259
Comput Biol Med
August 2025
School of Medical, Indigenous and Health Sciences, University of Wollongong, Wollongong, Australia.
Despite rapid healthcare digitization, extracting information from unstructured electronic health records (EHRs), such as nursing notes, remains challenging due to inconsistencies and ambiguities in clinical documentation. Generative large language models (LLMs) have emerged as promising tools for automating information extraction (IE); however, their application in real-world clinical settings, such as residential aged care (RAC), is limited by critical gaps. Prior studies have often focused on structured EHR data and conventional evaluation metrics such as accuracy and F1 score, overlooking critical aspects like robustness, fairness, bias, and contextual relevance, particularly in unstructured clinical narratives.
Sci Rep
August 2025
Department of Artificial Intelligence and Data Science, College of Computer Science and Engineering, University of Hail, Hail, Saudi Arabia.
Named entity recognition (NER) is a significant natural language processing (NLP) task used in several applications, including question answering and data retrieval. NER's primary goal is to discover, classify, and extract named entities into predetermined classes: location, person, and organization. Arabic NER is a challenging process due to the language's unique characteristics and complexity.
J Biomed Inform
August 2025
School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, No. 96, JinZhai Road Baohe District, Hefei, 230026, Anhui, China; Suzhou Institute for Advanced Research, University of Science and Technology of China, No. 166, Renai Road, Suzho
Objective: Large language models (LLMs) have exhibited remarkable efficacy in natural language processing (NLP) tasks, with fine-tuning for Biomedical Named Entity Recognition (BioNER) receiving significant research attention. However, the substantial computational demands associated with fine-tuning large-scale models constrain their development and deployment. Consequently, this study investigates parameter-efficient fine-tuning (PEFT) techniques to optimize LLMs for BioNER under limited computational resources.
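Parameter-efficient fine-tuning methods such as LoRA, one widely used PEFT technique, avoid updating the full weight matrices of an LLM. The NumPy sketch below shows the core low-rank update; it is a generic illustration of the method, not this study's implementation, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 8, 16
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight

# Trainable low-rank factors; B starts at zero so the effective
# weight equals the pretrained weight before any fine-tuning.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def effective_weight(W, A, B, alpha, r):
    # LoRA replaces W with W + (alpha / r) * B @ A in the forward pass;
    # only A and B (r * (d_in + d_out) parameters) receive gradients,
    # instead of the d_out * d_in parameters of full fine-tuning.
    return W + (alpha / r) * B @ A
```

With r much smaller than the layer dimensions, the trainable parameter count drops by orders of magnitude, which is what makes fine-tuning feasible under the limited computational resources the abstract describes.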
Data Brief
October 2025
Systems Engineering and Applications Laboratory, Cadi Ayyad University, ENSA, BP 2390, Marrakech, 40000, Marrakech-Safi, Morocco.
Automating code generation in manufacturing systems requires Artificial Intelligence (AI) models capable of interpreting textual requirement specifications. One of the main challenges is the absence of publicly available, domain-specific datasets suitable for training such models. This article presents AutoFactory, an open-source dataset that includes manually written and LLM-augmented requirement specifications, annotated by domain experts for Named Entity Recognition (NER) tasks using the BIO format.
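In the BIO format mentioned above, each token is tagged as Beginning (B-), Inside (I-), or Outside (O) an entity span. A minimal decoder that recovers entity spans from such tags is sketched below; the DEVICE entity type and the example sentence are made up for illustration and are not drawn from the AutoFactory schema.

```python
def bio_to_entities(tokens, tags):
    """Decode BIO tags into a list of (entity_type, entity_text) pairs."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # B- always opens a new entity, closing any open one.
            if current:
                entities.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            # I- continues the open entity of the same type.
            current[1].append(token)
        else:
            # O (or an inconsistent I-) closes any open entity.
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(words)) for etype, words in entities]
```

The B-/I- distinction is what lets adjacent entities of the same type remain separate spans, which a plain per-token type label cannot express.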
Stud Health Technol Inform
August 2025
Truveta.
Information extraction tasks, such as Named Entity Recognition (NER) and Relation Extraction (RE), are essential for advancing clinical research and applications. However, these tasks are hindered by the scarcity of labeled clinical documents due to privacy concerns and high annotation costs. This study introduces a novel framework combining Large Language Models (LLMs) for data augmentation with an adapted BERT model for clinical information extraction.