A word of caution on the concentration of rifampin for endografts.

J Vasc Surg

Division of Vascular Surgery, Department of Surgery, Emory University School of Medicine, Atlanta, Ga.

Published: September 2019



Article Abstract


Source: http://dx.doi.org/10.1016/j.jvs.2019.03.078


Similar Publications

Event Reduction With Obicetrapib: A Word Of Caution.

J Am Coll Cardiol

August 2025

Division of Cardiovascular Disease, University of Alabama, Birmingham, Alabama, USA.


Cognitive impairment is one of the most common and debilitating clinical features of multiple sclerosis (MS). Neuropsychological assessment, however, is time-consuming and requires personal resources; due to limited resources in daily clinical practice, information on cognitive profiles is often lacking despite its high prognostic relevance. Time-saving and effective tools are required to bridge this gap.


Can DSP Mutation Carriers Safely Participate in Sport Activity?: A Word of Caution.

JACC Clin Electrophysiol

July 2025

Cardiovascular Pathology Unit, Department of Cardiac, Thoracic and Vascular Sciences and Public Health, University of Padua, Padua, Italy.


Indocyanine green (ICG) fluorescence imaging has emerged as a potential tool in evaluating biliary atresia, offering real-time visualisation of hepatobiliary excretion. Following intravenous administration, ICG is taken up by hepatocytes and excreted into bile, allowing assessment of biliary patency. In biliary atresia, absent or delayed fluorescence in the intestine may suggest obstruction.


Background: Large language models (LLMs) show promise in clinical contexts but can generate false facts (often referred to as "hallucinations"). One subset of these errors arises from adversarial attacks, in which fabricated details embedded in prompts lead the model to produce or elaborate on false information. We embedded fabricated content in clinical prompts to elicit adversarial hallucination attacks in multiple large language models.
