Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Learning intrinsic bias from limited data has been considered the main reason for the poor generalizability of deepfake detection. Beyond the previously identified content bias and specific-forgery bias, we reveal a novel spatial bias: detectors inertly anticipate structural forgery clues at the image center, which can also degrade the generalization of existing methods. We present ED4, a simple and effective strategy that addresses the aforementioned biases explicitly at the data level in a unified framework, rather than through implicit disentanglement via network design. In particular, we develop ClockMix to produce facial-structure-preserving mixtures of arbitrary samples, which allows the detector to learn from an exponentially extended data distribution with far more diverse identities, backgrounds, local manipulation traces, and co-occurrences of multiple forgery artifacts. We further propose the Adversarial Spatial Consistency Module (AdvSCM), which prevents the extraction of spatially biased features by adversarially generating spatially inconsistent images and constraining their extracted features to be consistent. As a model-agnostic debiasing strategy, ED4 is plug-and-play: it can be integrated with various deepfake detectors for significant gains. Extensive experiments demonstrate its effectiveness and superiority over existing deepfake detection approaches. Code is available at https://github.com/beautyremain/ED4.
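To make the ClockMix idea concrete, the sketch below mixes two aligned face crops by angular ("clock") sectors around the image center, so the overall facial layout is preserved while identities, backgrounds, and any local forgery traces come from both sources. This is an illustrative interpretation of the abstract's description, not the authors' implementation (the function name, sector count, and NumPy-based masking are assumptions; the official code is in the ED4 repository):

```python
import numpy as np

def clockmix(img_a, img_b, num_sectors=8, rng=None):
    """Mix two same-sized (H, W, C) face crops by angular sectors.

    Each of `num_sectors` wedges around the image center is filled from
    either img_a or img_b, chosen at random, preserving the spatial
    layout of facial parts while mixing content from both samples.
    Illustrative sketch only; not the ED4 reference implementation.
    """
    rng = np.random.default_rng(rng)
    h, w = img_a.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Angle of every pixel relative to the center, shifted into [0, 2*pi].
    theta = np.arctan2(ys - cy, xs - cx) + np.pi
    sector = np.minimum((theta / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    # Randomly assign each wedge to one of the two source images.
    take_b = rng.integers(0, 2, size=num_sectors).astype(bool)
    mask = take_b[sector]            # (H, W) boolean wedge mask
    return np.where(mask[..., None], img_b, img_a)
```

Because every output pixel is copied verbatim from one of the two inputs, the mixture introduces no blending artifacts of its own; only the wedge boundaries differ between augmented samples.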


Source: http://dx.doi.org/10.1109/TIP.2025.3588323

Publication Analysis

Top Keywords

deepfake detection (12)
spatial bias (8)
explicit data-level (4)
data-level debiasing (4)
deepfake (4)
debiasing deepfake (4)
detection learning (4)
learning intrinsic (4)
bias (4)
intrinsic bias (4)

Similar Publications

Deepfakes pose critical threats to digital media integrity and societal trust. This paper presents a hybrid deepfake detection framework combining Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) to address challenges in scalability, generalizability, and adversarial robustness. The framework integrates adversarial training, a temporal decay analysis model, and multimodal detection across audio, video, and text domains.


In recent years, the rapid advancement of deep learning techniques has significantly propelled the development of face forgery methods, drawing considerable attention to face forgery detection. However, existing detection methods still struggle with generalization across different datasets and forgery techniques. In this work, we address this challenge by leveraging both local texture cues and global frequency domain information in a complementary manner to enhance the robustness of face forgery detection.


The rapid development of deepfake techniques poses a serious threat to multimedia authenticity, driving increased attention to deepfake detection. However, most existing methods focus solely on classification while overlooking forgery localization, which is essential for understanding manipulation intent. To address this issue, we propose a novel Hierarchical Spectral-Feature Fusion Network (HSFF-Net) for deepfake detection and localization from spatial- and frequency-domain views.


Data synthesis methods have shown promising results in general deepfake detection tasks. This is attributed to the inherent blending process in deepfake creation, which leaves behind distinct synthetic artifacts. However, the existence of content-irrelevant artifacts has not been explicitly explored in deepfake synthesis.


With the growth of social media, people are sharing more content than ever, including X posts that reflect a variety of emotions and opinions. AI-generated synthetic text, known as deepfake text, is used to imitate human writing to disseminate misleading information and fake news. However, as deepfake technology continues to grow, it becomes harder to accurately understand people's opinions on deepfake posts.
