Face Anti-Spoofing (FAS) is constantly challenged by new attack types and mediums; a FAS model must therefore not only mitigate Catastrophic Forgetting (CF) of previously learned spoofing knowledge during continual learning but also generalize to unseen spoofing attacks. In this paper, we first show that current strategies against catastrophic forgetting are ill-suited to the imperceptible nature of spoofing cues in FAS and neglect generalization. We then propose an instance-wise dynamic central difference convolutional adapter module with a weighted ensemble strategy for the Vision Transformer (ViT), enabling efficient fine-tuning with low-shot data by extracting generalized spoofing texture information. Furthermore, we find that catastrophic forgetting in FAS is reflected in inconsistent ViT attention matrices across continual sessions, as the attention matrices encode relationships of spoofing clues between patch tokens. Hence, we introduce an attention consistency regularization that learns and reuses attention matrices to alleviate forgetting. Finally, we devise new protocols and conduct extensive experiments to validate superior performance in both alleviating catastrophic forgetting and generalizing to unseen domains. The code and protocol files are released at https://github.com/RizhaoCai/DCL-FAS-ICCV2023.
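The central difference convolution (CDC) underlying the proposed adapter augments each vanilla convolution response with a central-difference term that emphasizes fine texture gradients. A minimal numpy sketch of the CDC operator on a single-channel map follows; the function name and the simple loop implementation are illustrative only, not the paper's actual ViT adapter module:

```python
import numpy as np

def central_difference_conv2d(x, w, theta=0.7):
    """Central Difference Convolution (CDC) on a single-channel 2D map.

    y(p0) = sum_pn w(pn) * x(p0 + pn)  -  theta * x(p0) * sum_pn w(pn)

    theta = 0 recovers vanilla convolution; larger theta weighs the
    central-difference (texture-gradient) term more heavily.
    """
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(x, dtype=float)
    w_sum = w.sum()
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            patch = xp[i:i + kh, j:j + kw]
            # vanilla response minus theta * (center value * kernel mass)
            out[i, j] = (patch * w).sum() - theta * x[i, j] * w_sum
    return out
```

On a constant (texture-free) input the CDC response shrinks to (1 - theta) times the vanilla response, which is exactly the operator's intent: suppress flat regions and keep local texture differences.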
DOI: http://dx.doi.org/10.1109/TPAMI.2025.3601053
Front Comput Neurosci
August 2025
Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States.
Artificial neural networks are limited in the number of patterns they can store and accurately recall, with capacity constraints arising from factors such as network size, architectural structure, pattern sparsity, and pattern dissimilarity. Exceeding these limits produces recall errors and, eventually, catastrophic forgetting, a major challenge in continual learning. In this study, we characterize the theoretical maximum memory capacity of single-layer feedforward networks as a function of these parameters.
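The capacity limit described above can be illustrated with a classic Hebbian associative memory, where recall degrades as the number of stored patterns approaches a fraction of the network size. This is a generic sketch of the phenomenon, not the specific single-layer model analyzed in the article; all function names are illustrative:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian outer-product storage for +/-1 patterns; zero self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe):
    """One synchronous update step; ties at zero map to +1."""
    s = W @ probe
    return np.where(s >= 0, 1, -1)

def recall_error_rate(n_units, n_patterns, seed=0):
    """Fraction of bits recalled incorrectly, averaged over stored patterns."""
    rng = np.random.default_rng(seed)
    P = rng.choice([-1, 1], size=(n_patterns, n_units))
    W = hebbian_weights(P)
    return sum((recall(W, p) != p).mean() for p in P) / n_patterns
```

Storing a single pattern yields perfect recall, while overloading the network (patterns approaching a substantial fraction of the unit count) produces the recall errors that precede catastrophic forgetting.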
IEEE Trans Neural Netw Learn Syst
September 2025
In industrial scenarios, semantic segmentation of surface defects is vital for identifying, localizing, and delineating defects. However, new defect types constantly emerge with product iterations or process updates. Existing defect segmentation models lack incremental learning capabilities, and direct fine-tuning (FT) often leads to catastrophic forgetting.
IEEE Trans Neural Netw Learn Syst
September 2025
Class incremental learning (CIL) offers a promising framework for continuous fault diagnosis (CFD), allowing networks to accumulate knowledge from streaming industrial data and recognize new fault classes. However, current CIL methods assume a balanced data stream, which does not align with the long-tail distribution of fault classes in real industrial scenarios. To fill this gap, this article investigates, through experimental analysis, the impact of long-tail bias in the data stream on the CIL training process.
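A long-tail class distribution is commonly simulated with exponentially decaying per-class sample counts, and a standard baseline mitigation is inverse-frequency loss reweighting. The sketch below illustrates both; it is a generic setup under assumed conventions, not necessarily the protocol or method used in this article:

```python
import numpy as np

def longtail_counts(n_classes, n_max, imbalance_ratio):
    """Exponentially decaying per-class sample counts: class k receives
    n_max * (1 / imbalance_ratio) ** (k / (n_classes - 1)) samples,
    so the head class has imbalance_ratio times more data than the tail."""
    k = np.arange(n_classes)
    return np.round(n_max * (1.0 / imbalance_ratio) ** (k / (n_classes - 1))).astype(int)

def inverse_freq_weights(counts):
    """Per-class loss weights proportional to 1/count, normalized to mean 1,
    so tail classes contribute more per sample to the training loss."""
    w = 1.0 / counts
    return w * len(counts) / w.sum()
```

With 10 classes, 1000 head samples, and an imbalance ratio of 100, the tail class receives only 10 samples but the largest per-sample loss weight.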
Curr Biol
August 2025
Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer Street, Providence, RI 02912, USA.
Across various types of learning and memory, when a new training session follows a previous one after a certain temporal interval, the previously acquired learning can be disrupted, an effect known as retrograde interference (RI) or catastrophic forgetting. This disruption is thought to result from destructive interactions between the learning of the first-trained task and that of the second-trained task while the former has not yet stabilized. Such destructive interactions have been considered characteristic not only of RI but also of related phenomena.
Neural Netw
August 2025
Department of Statistical Sciences, University of Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Ontario, Canada; Department of Statistics and Data Science, MBZUAI, Abu Dhabi, UAE. Electronic address:
Mitigating catastrophic forgetting remains a fundamental challenge in incremental learning. This paper identifies a key limitation of the widely used softmax cross-entropy loss: the non-identifiability inherent in the standard distillation loss built upon it. To address this issue, we propose two complementary strategies: (1) adopting an imbalance-invariant distillation loss to mitigate the adverse effect of imbalanced weights during distillation, and (2) regularizing the original prediction/distillation loss with shift-sensitive alternatives, which render the optimization problem identifiable and proactively prevent imbalance from arising.
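The non-identifiability mentioned in the abstract stems from a basic property of softmax: adding the same constant to every logit leaves the output distribution unchanged, so a distillation loss that matches softmax outputs cannot pin down the absolute logit values. A minimal demonstration (independent of the paper's specific losses):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])
shifted = logits + 7.3  # uniform shift of every logit

# The two logit vectors are indistinguishable through softmax,
# so a softmax-matching distillation loss cannot separate them.
same = np.allclose(softmax(logits), softmax(shifted))
```

This invariance is why shift-sensitive regularizers, as proposed above, can restore identifiability: they penalize logit configurations that softmax alone cannot distinguish.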