Article Abstract

Artificial intelligence (AI), particularly machine learning (ML), is increasingly influencing pharmacovigilance (PV) by improving case triage and signal detection. Several studies have reported encouraging performance, with high F1 scores and alignment with expert assessments, suggesting that AI tools can help prioritize reports and identify potential safety issues faster than manual review. However, integrating these tools into PV raises concerns. Most models are designed for prediction, not explanation, and operate as "black boxes," offering limited insight into how decisions are made. This lack of transparency may undermine trust and clinical utility, especially in a domain where causality is central. Traditional ML relies on correlational patterns and may amplify biases inherent in spontaneous reporting systems, such as under-reporting, missing data, and confounding. Recent developments in explainable AI (XAI) and causal AI aim to address these issues by offering more interpretable and causally meaningful outputs, but their use in PV remains limited. These methods face challenges, including the need for robust data, the difficulty of defining ground truth for adverse drug reactions (ADRs), and the lack of standard validation frameworks. In this commentary, we explore the promise and pitfalls of AI in PV and argue for a shift toward causally informed, interpretable models grounded in epidemiological reasoning. We identify four priorities: incorporating causal inference into AI workflows; developing benchmark datasets to support transparent evaluation; ensuring model outputs align with clinical and regulatory logic; and upholding rigorous validation standards. The goal is not to replace expert judgment, but to enhance it with tools that are more transparent, reliable, and capable of separating true signals from noise. Moving toward explainable and causally robust AI is essential to ensure that its application in pharmacovigilance is both scientifically credible and ethically sound.
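As an illustrative sketch of the quantities the abstract refers to (not code from the article): signal detection in spontaneous reporting systems is often based on disproportionality measures such as the proportional reporting ratio (PRR), and classifier performance is commonly summarized with the F1 score. The counts below are hypothetical.

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table.

    a: reports with the drug of interest AND the reaction of interest
    b: reports with the drug of interest, other reactions
    c: reports with other drugs AND the reaction of interest
    d: reports with other drugs, other reactions
    """
    return (a / (a + b)) / (c / (c + d))


def f1_score(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Hypothetical report counts: the drug-reaction pair is reported
# proportionally far more often than other drugs, so the PRR is high.
print(round(prr(20, 180, 40, 3760), 2))   # -> 9.5
print(round(f1_score(80, 20, 40), 3))     # -> 0.727
```

A high PRR flags a potential signal but, as the commentary stresses, it is a correlational quantity: it cannot by itself separate a causal adverse drug reaction from reporting bias or confounding.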


Source
http://dx.doi.org/10.1007/s11096-025-02004-z


Similar Publications

Nuclear receptors (NRs) are a superfamily of ligand-activated transcription factors that regulate gene expression in response to metabolic, hormonal, and environmental signals. These receptors play a critical role in metabolic homeostasis, inflammation, immune function, and disease pathogenesis, positioning them as key therapeutic targets. This review explores the mechanistic roles of NRs such as PPARs, FXR, LXR, and thyroid hormone receptors (THRs) in regulating lipid and glucose metabolism, energy expenditure, cardiovascular health, and neurodegeneration.


Aim: The purpose of this study was to assess the accuracy of a customized deep learning model based on CNN and U-Net for detecting and segmenting the second mesiobuccal canal (MB2) of maxillary first molar teeth on cone beam computed tomography (CBCT) scans.

Methodology: CBCT scans of 37 patients were imported into 3D slicer software to crop and segment the canals of the mesiobuccal (MB) root of the maxillary first molar. The annotated data were divided into two groups: 80% for training and validation and 20% for testing.
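The 80/20 split described in the methodology can be sketched as follows (an illustrative example, not the authors' code; the identifiers are hypothetical, only the 37-scan count and split ratio come from the text):

```python
import random

# 37 CBCT scans, identified here by index only
scan_ids = list(range(37))

random.seed(42)            # reproducible shuffle
random.shuffle(scan_ids)

split = int(0.8 * len(scan_ids))   # 80% for training and validation
train_val_ids = scan_ids[:split]
test_ids = scan_ids[split:]        # remaining 20% held out for testing

print(len(train_val_ids), len(test_ids))  # -> 29 8
```

Splitting at the patient/scan level, as done here, keeps all annotations from one scan on the same side of the split and avoids leakage between training and test sets.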


Obsessive-compulsive disorder (OCD) is a chronic and disabling condition affecting approximately 3.5% of the global population, with diagnosis delayed by an average of 7.1 years and often confounded with other psychiatric disorders.


Use of artificial intelligence for classification of fractures around the elbow in adults according to the 2018 AO/OTA classification system.

BMC Musculoskelet Disord

September 2025

Department of Clinical Sciences at Danderyds Hospital, Department of Orthopedic Surgery, Karolinska Institutet, Stockholm, 182 88, Sweden.

Background: This study evaluates the accuracy of an Artificial Intelligence (AI) system, specifically a convolutional neural network (CNN), in classifying elbow fractures using the detailed 2018 AO/OTA fracture classification system.

Methods: A retrospective analysis of 5,367 radiograph exams visualizing the elbow from adult patients (2002-2016) was conducted using a deep neural network. Radiographs were manually categorized according to the 2018 AO/OTA system by orthopedic surgeons.


Purpose: The study aims to compare the treatment recommendations generated by four leading large language models (LLMs) with those from 21 sarcoma centers' multidisciplinary tumor boards (MTBs) of the sarcoma ring trial in managing complex soft tissue sarcoma (STS) cases.

Methods: We simulated STS-MTBs using four LLMs: Llama 3.2-vision: 90b, Claude 3.
