Category Ranking: 98% · Total Visits: 921 · Avg Visit Duration: 2 minutes · Citations: 20

Article Abstract

The advent of Deep Learning (DL) has significantly propelled diagnostic radiology forward by enhancing image analysis and interpretation. The introduction of the Transformer architecture, followed by the development of Large Language Models (LLMs), has further revolutionized the domain. LLMs now have the potential to automate and refine the radiology workflow, from report generation to assistance in diagnostics and patient care. Integrating multimodal technology with LLMs could push these applications to unprecedented levels. However, LLMs come with unresolved challenges, such as information hallucination and bias, which can affect clinical reliability, and legislative and guideline frameworks have yet to catch up with these technological advancements. Radiologists must acquire a thorough understanding of these technologies to leverage LLMs' potential to the fullest while maintaining medical safety and ethics. This review aims to aid in that endeavor.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11217134
DOI: http://dx.doi.org/10.1007/s11604-024-01552-0

Publication Analysis

Top Keywords

large language (8)
language models (8)
impact large (4)
models radiology (4)
radiology guide (4)
guide radiologists (4)
radiologists latest (4)
latest innovations (4)
innovations advent (4)
advent deep (4)

Similar Publications

Applications of Federated Large Language Model for Adverse Drug Reactions Prediction: Scoping Review.

J Med Internet Res

September 2025

Department of Information Systems and Cybersecurity, The University of Texas at San Antonio, 1 UTSA Circle, San Antonio, TX, 78249, United States, 1 (210) 458-6300.

Background: Adverse drug reactions (ADRs) present significant challenges in health care, where early prevention is vital for effective treatment and patient safety. Traditional supervised learning methods struggle with heterogeneous health care data because of its unstructured nature, regulatory constraints, and restricted access to sensitive personally identifiable information.

Objective: This review aims to explore the potential of federated learning (FL) combined with natural language processing and large language models (LLMs) to enhance ADR prediction.
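The core idea behind the federated setup described in this objective is that patient data never leaves each institution; only model parameters travel. A minimal sketch of federated averaging (FedAvg) over two simulated sites, using a toy logistic-regression model (all data, function names, and hyperparameters here are illustrative, not taken from the review):

```python
# Minimal FedAvg sketch: each site trains locally on its own data,
# only weights are shared, and the server averages them weighted
# by local sample counts.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local step: plain logistic-regression gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient step
    return w

def fed_avg(weights_list, sizes):
    """Server aggregation: sample-size-weighted average of site weights."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights_list, sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Two hospitals with private data that never leaves the site.
sites = [(rng.normal(size=(40, 3)), rng.integers(0, 2, 40)),
         (rng.normal(size=(60, 3)), rng.integers(0, 2, 60))]
for _ in range(10):  # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = fed_avg(local_ws, [len(y) for _, y in sites])
print(global_w.shape)  # (3,)
```

In a real ADR pipeline the local model would be an NLP or LLM component rather than logistic regression, but the aggregation pattern is the same.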

Background: Primary liver cancer, particularly hepatocellular carcinoma (HCC), poses significant clinical challenges due to late-stage diagnosis, tumor heterogeneity, and rapidly evolving therapeutic strategies. While systematic reviews and meta-analyses are essential for updating clinical guidelines, their labor-intensive nature limits timely evidence synthesis.

Objective: This study proposes an automated literature screening workflow powered by large language models (LLMs) to accelerate evidence synthesis for HCC treatment guidelines.
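The screening workflow proposed here reduces, at its core, to classifying each abstract against the review's inclusion criteria. A hedged sketch of that loop, with the LLM call stubbed by a keyword rule (the function names and the stub are illustrative assumptions, not the study's actual pipeline):

```python
# Sketch of an automated literature-screening loop: each abstract is
# passed to a classifier that returns an include/exclude verdict.
def screen(abstract, classify):
    """Keep an abstract if the classifier does not clearly exclude it."""
    return classify(abstract) in {"include", "maybe"}

# Stand-in for an LLM call; a real pipeline would prompt a model with
# the inclusion criteria and parse its structured answer.
def keyword_stub(abstract):
    return "include" if "hepatocellular" in abstract.lower() else "exclude"

records = [
    "Sorafenib outcomes in hepatocellular carcinoma patients.",
    "A study of asthma inhaler adherence in adolescents.",
]
kept = [a for a in records if screen(a, keyword_stub)]
print(len(kept))  # 1
```

Routing borderline ("maybe") verdicts to a human reviewer is the usual way such workflows preserve recall while cutting manual workload.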

Purpose: Speech disfluencies are common in individuals who do not stutter, with estimates suggesting a typical rate of six per 100 words. Factors such as language ability, processing load, planning difficulty, and communication strategy influence disfluency. Recent work has indicated that bilinguals may produce more disfluencies than monolinguals, but the factors underlying disfluency in bilingual children are poorly understood.

In this paper we analyse gender-based biases in the language within complex legal judgments. Our aims are: (i) to determine the extent to which purported biases discussed in the literature by feminist legal scholars are identifiable from the language of legal judgments themselves, and (ii) to uncover new forms of bias represented in the data that may promote further analysis and interpretation of the functioning of the legal system. We consider a large set of 2530 judgments in family law in Australia over a 20 year period, examining the way that male and female parties to a case are spoken to and about, by male and female judges, in relation to their capacity to provide care for children subject to the decision.

Evaluating anti-LGBTQIA+ medical bias in large language models.

PLOS Digit Health

September 2025

Department of Dermatology, Stanford University, Stanford, California, United States of America.

Large Language Models (LLMs) are increasingly deployed in clinical settings for tasks ranging from patient communication to decision support. While these models demonstrate race-based and binary gender biases, anti-LGBTQIA+ bias remains understudied despite documented healthcare disparities affecting these populations. In this work, we evaluated the potential of LLMs to propagate anti-LGBTQIA+ medical bias and misinformation.
