Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain, where only unlabeled samples are available. To this end, adversarial training is widely used in conventional UDA methods to reduce the discrepancy between source and target domains. Recently, prompt tuning has emerged as an efficient way to adapt large pre-trained vision-language models like CLIP to a variety of downstream tasks. In this paper, we present a novel method named Adversarial DuAl Prompt Tuning (ADAPT) for UDA, which employs text prompts and visual prompts to guide CLIP simultaneously. Rather than simply performing a joint optimization of text prompts and visual prompts, we integrate text prompt tuning and visual prompt tuning into a collaborative framework where they engage in an adversarial game: text prompt tuning focuses on distinguishing between source and target images, whereas visual prompt tuning seeks to align source and target domains. Unlike most existing adversarial training-based UDA approaches, ADAPT does not require explicit domain discriminators for domain alignment. Instead, the objective is effectively achieved at both global and category levels through modeling the joint probability distribution of images on domains and categories. Extensive experiments on four benchmark datasets demonstrate the effectiveness of our ADAPT method for UDA. We have released our code at https://github.com/Liuziyi1999/ADAPT.
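To make the adversarial game concrete, here is a minimal, hedged sketch of the objective the abstract describes: text prompts model the joint distribution over (domain, category) and try to classify each image's domain, while visual prompts try to flip that prediction. All names, shapes, and the toy encoder are illustrative assumptions, not the authors' implementation; see the linked repository for the real code.

```python
# Hedged sketch of the adversarial dual-prompt objective described in the
# abstract. Everything here (encoder stand-in, prompt shapes, loss weights)
# is an illustrative assumption, not the authors' actual API.
import torch
import torch.nn.functional as F

K, D_EMB = 10, 512                      # K categories, CLIP-style embedding size

# Learnable prompts: one text prompt per (domain, category) pair, so the text
# side models the joint distribution p(domain, category | image).
text_prompts = torch.randn(2 * K, D_EMB, requires_grad=True)    # [src|tgt] x K
visual_prompts = torch.randn(8, D_EMB, requires_grad=True)      # prompt tokens (stand-in)

def image_features(x):
    """Stand-in for a frozen CLIP image encoder with visual prompts injected."""
    return F.normalize(x + visual_prompts.mean(0), dim=-1)

def joint_logits(x, temperature=0.07):
    feats = image_features(x)                        # (B, D_EMB)
    txt = F.normalize(text_prompts, dim=-1)          # (2K, D_EMB)
    return feats @ txt.t() / temperature             # (B, 2K): joint over (domain, category)

def losses(x_src, y_src, x_tgt):
    # p(domain, category | image) for source and target batches
    p_src = joint_logits(x_src).softmax(-1).view(-1, 2, K)
    p_tgt = joint_logits(x_tgt).softmax(-1).view(-1, 2, K)

    # Category-level term on labeled source: p(category | image), domains marginalized out
    cls_loss = F.nll_loss(p_src.sum(1).clamp_min(1e-8).log(), y_src)

    # Domain marginals p(domain | image): index 0 = source, 1 = target
    d_src = p_src.sum(-1).clamp_min(1e-8)
    d_tgt = p_tgt.sum(-1).clamp_min(1e-8)

    # Text prompts play the discriminator: push each image toward its true domain.
    disc_loss = -(d_src[:, 0].log().mean() + d_tgt[:, 1].log().mean())

    # Visual prompts play the aligner: the usual label-flip adversarial objective.
    align_loss = -(d_src[:, 1].log().mean() + d_tgt[:, 0].log().mean())
    return cls_loss, disc_loss, align_loss

# Alternating updates: text prompts minimize cls + disc; visual prompts minimize cls + align.
opt_text = torch.optim.Adam([text_prompts], lr=1e-3)
opt_vis = torch.optim.Adam([visual_prompts], lr=1e-3)
x_src, y_src = torch.randn(32, D_EMB), torch.randint(0, K, (32,))
x_tgt = torch.randn(32, D_EMB)

cls, disc, _ = losses(x_src, y_src, x_tgt)
opt_text.zero_grad(); (cls + disc).backward(); opt_text.step()
cls, _, align = losses(x_src, y_src, x_tgt)
opt_vis.zero_grad(); (cls + align).backward(); opt_vis.step()
```

Note how the domain marginals come out of the same softmax as the category predictions, which is how a joint model over domains and categories can avoid a separate domain-discriminator network.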

Source: http://dx.doi.org/10.1109/TIP.2025.3541868

Publication Analysis

Top Keywords

prompt tuning (32), source target (12), adversarial training (8), prompt (8), tuning (8), adversarial dual (8), dual prompt (8), unsupervised domain (8), domain adaptation (8), target domains (8)

Similar Publications

Temporal modeling plays an important role in effectively adapting powerful pretrained text-image foundation models to text-video retrieval. However, existing methods often rely on additional heavy trainable modules, such as transformers or BiLSTMs, which are inefficient. In contrast, we avoid introducing such heavy components by leveraging frozen foundation models.
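As a rough illustration of that idea, the sketch below scores a text-video pair with nothing but frozen per-frame embeddings and parameter-free pooling; the shapes, the softmax-weighted pooling, and all names are assumptions for illustration, not the paper's actual method.

```python
# Hedged illustration: query-conditioned, parameter-free temporal pooling over
# frozen per-frame embeddings, in place of a trainable temporal module.
import torch
import torch.nn.functional as F

def video_text_score(frame_feats, text_feat):
    """frame_feats: (T, D) frame embeddings from a frozen encoder;
    text_feat: (D,) query embedding. Returns one similarity score."""
    frames = F.normalize(frame_feats, dim=-1)
    query = F.normalize(text_feat, dim=-1)
    sims = frames @ query                      # (T,) per-frame similarities
    weights = (sims / 0.01).softmax(0)         # sharpen toward the best frames
    return (weights * sims).sum()

score = video_text_score(torch.randn(12, 512), torch.randn(512))
```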

BACKGROUND: This study used CT imaging analyzed with deep learning techniques to assess the diagnostic accuracy of lung metastasis detection in patients with breast cancer. The aim of the research was to create and verify a system for detecting malignant and metastatic lung lesions that uses YOLOv10 and transfer learning. MATERIAL AND METHODS: From January 2023 to 2024, CT scans of 16 patients with breast cancer who had confirmed lung metastases were gathered retrospectively from Erzincan Mengücek Gazi Training and Research Hospital.
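For orientation, this is roughly what a YOLOv10 transfer-learning setup looks like with the ultralytics package; the dataset YAML name and hyperparameters below are placeholders, not the study's actual configuration.

```python
# Hedged sketch of YOLOv10 transfer learning via the ultralytics package.
# The dataset config and hyperparameters are placeholder assumptions.
from ultralytics import YOLO

model = YOLO("yolov10n.pt")              # pretrained weights as the transfer base
model.train(
    data="lung_ct_lesions.yaml",         # hypothetical dataset config: image paths + classes
    epochs=100,
    imgsz=640,
)
metrics = model.val()                    # mAP and related detection metrics
```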

Large Language Models (LLMs) show promise in augmenting digital health applications. However, developing and scaling large models faces computational constraints, data security concerns, and limited internet accessibility in some regions. We developed and tested Med-Pal, a medical domain-specific LLM chatbot fine-tuned with a fine-grained, expert-curated medication-enquiry dataset consisting of 1,100 question-and-answer pairs.

Pay more attention to the robustness of LLMs on adversarial prompt for instruction data mining.

Neural Netw

August 2025

National Key Laboratory of Parallel and Distributed Computing, College of Computer Science and Technology, National University of Defense Technology, Changsha, Hunan 410073, China.

Instruction tuning has emerged as a paramount method for tailoring the behaviors of LLMs. Recent studies have unveiled the potential for LLMs to achieve high performance through fine-tuning with a limited quantity of high-quality instruction data. Instruction-Following Difficulty is one of the most representative approaches in instruction data mining; it selects, as high-quality instruction data, samples on which LLMs fail to generate responses that align with the provided instructions.
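One common formulation of an Instruction-Following Difficulty style score (assumed here; not necessarily this paper's exact definition) compares the model's loss on a response with and without the instruction prefix. A minimal sketch with Hugging Face transformers, using GPT-2 purely as a placeholder model:

```python
# Hedged sketch of an IFD-style score: ratio of the model's loss on the
# response conditioned on the instruction to its loss on the response alone.
# Higher values suggest the instruction does not help the model, flagging
# harder (more informative) samples. Model and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def response_loss(prefix: str, response: str) -> float:
    """Mean cross-entropy over the response tokens, given an optional prefix."""
    prefix_ids = tok(prefix, return_tensors="pt").input_ids if prefix else None
    resp_ids = tok(response, return_tensors="pt").input_ids
    ids = torch.cat([prefix_ids, resp_ids], dim=1) if prefix_ids is not None else resp_ids
    labels = ids.clone()
    if prefix_ids is not None:
        labels[:, :prefix_ids.size(1)] = -100    # score only the response tokens
    return lm(ids, labels=labels).loss.item()

def ifd_score(instruction: str, response: str) -> float:
    conditioned = response_loss(instruction + "\n", response)
    unconditioned = response_loss("", response)
    return conditioned / unconditioned

print(ifd_score("Translate to French: Good morning.", "Bonjour."))
```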

Medical Entity Linking in Low-Resource Settings with Fine-Tuning-Free LLMs.

Stud Health Technol Inform

September 2025

Chair of Medical Informatics, Institute of AI and Informatics in Medicine (AIIM), TUM University Hospital, Technical University of Munich, Munich, Germany.

Introduction: Medical entity linking is an important task in biomedical natural language processing, aiming to align textual mentions of medical concepts with standardized concepts in ontologies. Most existing approaches rely on supervised models or domain-specific embeddings, which require large datasets and significant computational resources.

Objective: The objective of this work is (1) to investigate the effectiveness of large language models (LLMs) in improving both candidate generation and disambiguation for medical entity linking through synonym expansion and in-context learning, and (2) to evaluate this approach against traditional string-matching and supervised methods.
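As a hedged sketch of such a fine-tuning-free pipeline, the toy code below wires together the two stages named in the objective: LLM synonym expansion to widen candidate generation, then in-context disambiguation among fuzzy-matched ontology candidates. The ontology, the canned `llm` stub, and every name are illustrative assumptions, not the paper's system.

```python
# Hedged sketch of fine-tuning-free medical entity linking:
# (1) LLM synonym expansion, (2) fuzzy candidate generation against an
# ontology, (3) in-context disambiguation. All data and names are toy stand-ins.
from difflib import get_close_matches

# Toy ontology: concept ID -> preferred name (stand-in for UMLS-style data).
ontology = {
    "C0020538": "hypertensive disease",
    "C0020473": "hyperlipidemia",
    "C0011849": "diabetes mellitus",
}

def llm(prompt: str) -> str:
    """Stand-in for a chat-completion call; swap in a real LLM client here."""
    if "synonyms" in prompt:
        return "arterial hypertension, HTN, high BP"   # canned synonym expansion
    return "hypertensive disease"                      # canned disambiguation pick

def link_mention(mention: str, context: str) -> str:
    # 1) Synonym expansion: widen the candidate pool with LLM-suggested variants.
    synonyms = llm(f"List 3 medical synonyms for '{mention}', comma-separated.")
    queries = [mention] + [s.strip() for s in synonyms.split(",")]

    # 2) Candidate generation: fuzzy string match each variant against the ontology.
    candidates = set()
    for q in queries:
        candidates.update(get_close_matches(q.lower(), list(ontology.values()),
                                            n=2, cutoff=0.5))

    # 3) In-context disambiguation: let the LLM choose among the candidates.
    choice = llm(f"Mention: '{mention}'\nContext: {context}\n"
                 f"Candidates: {sorted(candidates)}\nReply with the best candidate.")
    return next(cui for cui, name in ontology.items() if name == choice.strip())

print(link_mention("high blood pressure", "BP elevated on two consecutive visits."))
```

Note that the mention itself may match nothing in the ontology; it is the expanded synonyms that recover the candidate, which is the point of stage (1).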
