Deepfake technology, which encompasses video manipulation techniques implemented through deep learning algorithms, such as face swapping and expression alteration, has advanced to the point that it generates fake videos increasingly difficult for human observers to detect, posing significant threats to societal security. Existing methods for detecting deepfake videos aim to identify such manipulated content to prevent the spread of misinformation. However, these methods often suffer from limited generalization, performing poorly on fake videos outside their training datasets. Moreover, research on the precise localization of manipulated regions within deepfake videos is limited, primarily due to the absence of datasets with fine-grained annotations specifying which regions have been manipulated. To address these challenges, this paper proposes a novel spatial-based training method that requires no fake samples to detect spatial manipulations in deepfake videos. By combining multi-part local displacement deformation with fusion, we generate more diverse deepfake feature data, enhancing detection accuracy for specific manipulation methods while producing mixed-region labels that guide manipulation localization. We use the Swin-Unet model for manipulation localization, incorporating classification, local difference, and manipulation localization loss functions to improve the precision of both localization and detection. Experimental results demonstrate that the proposed spatial-based training method without fake samples effectively simulates the features present in real datasets. Our method achieves satisfactory detection accuracy on datasets such as FF++, Celeb-DF, and DFDC, while accurately localizing the manipulated regions.
These findings indicate the effectiveness of the proposed self-blending method and model in deepfake video detection and manipulation localization.
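The self-blending idea described above, generating pseudo-fakes from real faces only, while emitting the mixed-region mask as a localization label, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`local_displacement`, `make_pseudo_fake`), the rectangular stand-ins for facial parts, and the simple pixel-roll deformation are all assumptions for the sake of the example.

```python
import numpy as np

def local_displacement(img, rng, max_shift=3):
    """Shift the image by a small random offset (crude stand-in for
    the multi-part local displacement deformation described above)."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def make_pseudo_fake(face, rng, n_parts=4):
    """Blend several locally deformed copies of a real face back into
    it; return the blended image plus the mixed-region mask that serves
    as the manipulation-localization label (no fake samples needed)."""
    h, w = face.shape[:2]
    blended = face.astype(np.float64).copy()
    mask = np.zeros((h, w), dtype=np.float64)
    for _ in range(n_parts):
        # random rectangular region standing in for one facial part
        y0, x0 = rng.integers(0, h // 2), rng.integers(0, w // 2)
        y1, x1 = y0 + rng.integers(4, h // 2), x0 + rng.integers(4, w // 2)
        region = np.zeros((h, w))
        region[y0:y1, x0:x1] = rng.uniform(0.5, 1.0)  # soft blend weight
        deformed = local_displacement(face, rng)
        # alpha-blend the deformed copy into the running composite
        blended = region[..., None] * deformed + (1 - region[..., None]) * blended
        mask = np.maximum(mask, region)
    return blended, mask

rng = np.random.default_rng(0)
real = rng.uniform(0, 255, size=(32, 32, 3))
fake, label = make_pseudo_fake(real, rng)
```

Pixels outside the union mask are untouched, so the mask is an exact label of where the pseudo-manipulation occurred, which is what lets a segmentation model such as Swin-Unet be supervised without annotated fake data.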
Download full-text PDF: PMC (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11785974) | DOI (http://dx.doi.org/10.1038/s41598-025-88523-1)
J Interpers Violence
September 2025
Goldsmiths, University of London, London, United Kingdom.
Advances in digital technologies provide new opportunities for harm, including sexualized deepfake abuse: the non-consensual creation, distribution, or threat to create/distribute an image or video of another person that has been altered in a nude or sexual way. Since 2017, there has been a proliferation of shared open-source technologies facilitating deepfake creation and dissemination, and a corresponding increase in cases of sexualized deepfake abuse. There is a substantive risk that the increased accessibility of easy-to-use tools, the normalization of non-consensually sexualizing others, and the minimization of harms experienced by those who have their images created and/or shared may impact prevention and response efforts.
J Vis Exp
August 2025
Department of Computer Science and Engineering, Gokaraju Rangaraju Institute of Engineering and Technology, Hyderabad, India.
Deepfakes pose critical threats to digital media integrity and societal trust. This paper presents a hybrid deepfake detection framework combining Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) to address challenges in scalability, generalizability, and adversarial robustness. The framework integrates adversarial training, a temporal decay analysis model, and multimodal detection across audio, video, and text domains.
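The abstract's "temporal decay analysis" can be illustrated as one plausible aggregation scheme: when combining per-frame fake probabilities into a video-level score, older frames are down-weighted exponentially so that recent evidence dominates. This is a hedged sketch; the paper's actual decay model, the function name `temporal_decay_score`, and the `decay` parameter are assumptions, not the authors' specification.

```python
import numpy as np

def temporal_decay_score(frame_scores, decay=0.9):
    """Aggregate per-frame fake probabilities into one video-level
    score, weighting recent frames more heavily via exponential decay.
    `decay` in (0, 1]; smaller values forget old frames faster, and
    decay=1.0 reduces to a plain mean."""
    scores = np.asarray(frame_scores, dtype=np.float64)
    n = len(scores)
    # weight for frame t (t = 0 is oldest): decay ** (n - 1 - t)
    weights = decay ** np.arange(n - 1, -1, -1)
    return float(np.sum(weights * scores) / np.sum(weights))

# recent frames look fake, so the aggregate leans toward "fake"
video_score = temporal_decay_score([0.1, 0.2, 0.9, 0.95])
```

A weighting like this keeps the decision responsive to the most recent frames, which matters when a manipulation appears only partway through a clip.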
Intern Med J
August 2025
Baker Heart and Diabetes Institute, Melbourne, Victoria, Australia.
False advertising is a long-standing problem, but with modern technology it is now possible to rapidly and inexpensively create, and widely distribute, fake video, audio, and written social media presentations attributed to people without their knowledge or permission. Such 'deepfake' videos and other promotional material, allegedly by healthcare and biomedical research experts, are increasingly being used, often encouraging the purchase of a product that is not scientifically validated and with which the expert is not associated. We describe some of our recent experiences and suggest potential actions.
Neural Netw
July 2025
State Key Laboratory of Integrated Services Networks, School of Electronic Engineering, Xidian University, Xi'an, 710071, Shaanxi, PR China. Electronic address:
With the rapid advancement of artificial intelligence, Deepfake technology, which involves the synthesis of highly realistic face-swapping images and videos, has garnered significant attention. While this technology has various legitimate applications, its misuse in political manipulation, identity fraud, and misinformation poses serious societal risks. Consequently, effective face forgery detection methods are crucial.
J Imaging
July 2025
Centre for Research and Technology Hellas, 57001 Thessaloniki, Greece.
Deepfake detection has become a critical issue due to the rise of synthetic media and its potential for misuse. In this paper, we propose a novel approach to deepfake detection by combining video frame analysis with facial microexpression features. The dual-branch fusion model utilizes a 3D ResNet18 for spatiotemporal feature extraction and a transformer model to capture microexpression patterns, which are difficult to replicate in manipulated content.
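The dual-branch design described above, a 3D ResNet18 embedding for spatiotemporal content and a transformer embedding for microexpressions, implies some fusion step before classification. A minimal late-fusion sketch is shown below; the concatenate-then-logistic head, the function name `fuse_and_classify`, and the embedding dimensions are illustrative assumptions, not the paper's stated architecture.

```python
import numpy as np

def fuse_and_classify(spatiotemporal_feat, microexpr_feat, w, b):
    """Late fusion of the two branch embeddings: concatenate them,
    then apply a logistic head to obtain a fake probability."""
    fused = np.concatenate([spatiotemporal_feat, microexpr_feat])
    logit = float(np.dot(w, fused) + b)
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(1)
st_feat = rng.standard_normal(512)   # stand-in for a 3D-ResNet18 video embedding
me_feat = rng.standard_normal(128)   # stand-in for a transformer microexpression embedding
w = rng.standard_normal(640) * 0.01  # hypothetical trained head weights
p_fake = fuse_and_classify(st_feat, me_feat, w, 0.0)
```

Concatenation keeps both branches' evidence intact and lets the classifier learn their relative importance, which is one common motivation for dual-branch fusion over averaging the two branch scores.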