Prompt injection attacks on vision-language models for surgical decision support.

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Importance: Artificial intelligence-driven analysis of laparoscopic video holds potential to increase the safety and precision of minimally invasive surgery. Vision-language models are particularly promising for video-based surgical decision support due to their ability to comprehend complex temporospatial (video) data. However, the same multimodal interfaces that enable such capabilities also introduce new vulnerabilities to manipulation through embedded deceptive text or images (prompt injection attacks).

Objective: To systematically evaluate how susceptible state-of-the-art video-capable vision-language models are to textual and visual prompt injection attacks in the context of clinically relevant surgical decision support tasks.

Design Setting And Participants: In this observational study, we systematically evaluated four state-of-the-art vision-language models (Gemini 1.5 Pro, Gemini 2.5 Pro, GPT-o4-mini-high, and Qwen 2.5-VL) across eleven surgical decision support tasks: detection of bleeding events, foreign objects, and image distortions, critical view of safety assessment, and surgical skill assessment. Prompt injection scenarios involved misleading textual prompts and visual perturbations, displayed as white text overlays applied for varying durations.
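The temporally varying visual injection described above can be illustrated with a minimal sketch. This is not the authors' pipeline: it represents frames as 2D grids of RGB tuples and stamps a white block where injected text would be rendered (a real attack would draw actual glyphs, e.g. with an imaging library); the function names and frame-window parameters are hypothetical.

```python
# Minimal sketch of a temporally varying visual prompt injection.
# Assumption: a video is a list of frames, each frame a 2D list of
# (R, G, B) tuples. A white block stands in for rendered overlay text.

WHITE = (255, 255, 255)

def overlay_white_block(frame, top=2, left=2, height=4, width=20):
    """Return a copy of the frame with a white rectangle stamped where
    the injected text overlay would appear."""
    out = [row[:] for row in frame]  # copy rows so the input is untouched
    for y in range(top, min(top + height, len(out))):
        for x in range(left, min(left + width, len(out[y]))):
            out[y][x] = WHITE
    return out

def inject(frames, start, end):
    """Apply the overlay only to frames in [start, end), so the injection
    duration can range from a single frame to the full video."""
    return [overlay_white_block(f) if start <= i < end else f
            for i, f in enumerate(frames)]
```

Varying the `[start, end)` window reproduces the single-frame versus full-duration conditions contrasted in the study.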

Main Outcomes And Measures: The primary measure was model accuracy, contrasted between baseline performance and each prompt injection condition.

Results: All vision-language models demonstrated good baseline accuracy, with Gemini 2.5 Pro generally achieving the highest mean [standard deviation] accuracy across all tasks (0.82 [0.01]), compared to Gemini 1.5 Pro (0.70 [0.03]) and GPT-o4-mini-high (0.67 [0.06]). Across tasks, Qwen 2.5-VL censored most outputs and achieved an accuracy of 0.58 [0.03] on non-censored outputs. Textual and temporally varying visual prompt injections reduced the accuracy of all models. Prolonged visual prompt injections were generally more harmful than single-frame injections. Gemini 2.5 Pro showed the greatest robustness and maintained stable performance on several tasks despite prompt injections, whereas GPT-o4-mini-high exhibited the highest vulnerability, with mean [standard deviation] accuracy across all tasks declining from 0.67 [0.06] at baseline to 0.24 [0.04] under full-duration visual prompt injection (P < .001).

Conclusion And Relevance: These findings indicate the critical need for robust temporal reasoning capabilities and specialized guardrails before vision-language models can be safely deployed for real-time surgical decision support.

Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12330433
DOI: http://dx.doi.org/10.1101/2025.07.16.25331645

Publication Analysis

Top Keywords

prompt injection (24)
vision-language models (24)
surgical decision (20)
decision support (20)
gemini pro (20)
visual prompt (16)
prompt injections (12)
prompt (9)
injection attacks (8)
qwen 2.5-vl (8)