
Reliable and Balanced Transfer Learning for Generalized Multimodal Face Anti-Spoofing

Category Ranking: 98% · Total Visits: 921 · Avg Visit Duration: 2 minutes · Citations: 20

Article Abstract

Face Anti-Spoofing (FAS) is essential for securing face recognition systems against presentation attacks. Recent advances in sensor technology and multimodal learning have enabled the development of multimodal FAS systems. However, existing methods often struggle to generalize to unseen attacks and diverse environments due to two key challenges: (1) Modality unreliability, where sensors such as depth and infrared suffer from severe domain shifts, impairing the reliability of cross-modal fusion; and (2) Modality imbalance, where over-reliance on a dominant modality weakens the model's robustness against attacks that affect other modalities. To overcome these issues, we propose MMDG++, a multimodal domain-generalized FAS framework built upon the vision-language model CLIP. In MMDG++, we design the Uncertainty-Guided Cross-Adapter++ (U-Adapter++) to filter out unreliable regions within each modality, enabling more reliable multimodal interactions. Additionally, we introduce Rebalanced Modality Gradient Modulation (ReGrad) for adaptive gradient modulation to balance modality convergence. To further enhance generalization, we propose Asymmetric Domain Prompts (ADPs) that leverage CLIP's language priors to learn generalized decision boundaries across modalities. We also develop a novel multimodal FAS benchmark to evaluate generalizability under various deployment conditions. Extensive experiments across this benchmark show our method outperforms state-of-the-art FAS methods, demonstrating superior generalization capability.

Source (DOI): http://dx.doi.org/10.1109/TPAMI.2025.3573785

Publication Analysis

Top Keywords

face anti-spoofing (8), multimodal fas (8), gradient modulation (8), multimodal (6), modality (6), fas (5), reliable balanced (4), balanced transfer (4), transfer learning (4), learning generalized (4)
