Adversarial attacks that mislead deep neural networks (DNNs) into making incorrect predictions can also be implemented in the physical world. However, most existing adversarial camouflage textures that attack object detection models consider only the effectiveness of the attack and ignore its stealthiness, so the generated camouflage textures appear abrupt to human observers. To address this issue, we propose adding a style transfer module to an adversarial texture generation framework. By computing a style loss between the texture and a specified style image, the module guides the generated adversarial texture toward good stealthiness, making it hard to detect for both DNNs and human observers in specific scenes. Experiments show that, in both the digital and physical worlds, the full-coverage vehicle adversarial camouflage texture we create has good stealthiness and can effectively fool advanced DNN object detectors while evading human observers in specific scenes.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11592712 | PMC |
| http://dx.doi.org/10.3390/e26110903 | DOI Listing |
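The core of the approach described in the abstract above is a style loss that pulls the optimized camouflage texture toward a reference style image. Below is a minimal sketch of such a Gram-matrix style loss in PyTorch; the feature extractor, layer indices, and `style_weight` are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a Gram-matrix style loss, assuming a PyTorch setup with a
# pretrained VGG feature extractor; names and layer choices are illustrative.
import torch
import torch.nn.functional as F
import torchvision.models as models

def gram_matrix(feat):
    # feat: (B, C, H, W) feature map -> (B, C, C) normalized Gram matrix
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(texture_img, style_img, feature_extractor, layer_ids):
    # Compare Gram matrices of intermediate features of the rendered
    # adversarial texture and the reference style image.
    loss = 0.0
    x, y = texture_img, style_img
    for i, layer in enumerate(feature_extractor):
        x, y = layer(x), layer(y)
        if i in layer_ids:
            loss = loss + F.mse_loss(gram_matrix(x), gram_matrix(y))
    return loss

# Illustrative use: the style loss would be added to the detection/attack loss
# when optimizing the camouflage texture (weights and layer indices assumed).
# vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
# total_loss = attack_loss + style_weight * style_loss(rendered, style_ref, vgg, {3, 8, 15, 22})
```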
Camouflage in nature seems to arise from competition between predator and prey. To survive, predators must find prey, while prey must avoid being found. A simulation model of that adversarial relationship is presented here.
Neural Netw
June 2025
Department of Computing Technologies, Swinburne University of Technology, Hawthorn, VIC 3122, Australia.
Deep learning models are often vulnerable to adversarial attacks in both digital and physical environments. Particularly challenging are physical attacks that involve subtle, unobtrusive modifications to objects, such as patch-sticking or light-shooting, designed to maliciously alter the model's output when the scene is captured and fed into the model. Developing physical adversarial attacks that are robust, flexible, inconspicuous, and difficult to trace remains a significant challenge.
J Imaging
January 2025
Department of Precision Instrument, Tsinghua University, Beijing 100084, China.
The increasing reliance on deep neural network-based object detection models in various applications has raised significant security concerns due to their vulnerability to adversarial attacks. In physical 3D environments, existing adversarial attacks that target object detection (3D-AE) face significant challenges. These attacks often require large and dispersed modifications to objects, making them easily noticeable and reducing their effectiveness in real-world scenarios.
Entropy (Basel)
October 2024
The Third Faculty of Xi'an Research Institute of High Technology, Xi'an 710064, China.
Adversarial attacks that mislead deep neural networks (DNNs) into making incorrect predictions can also be implemented in the physical world. However, most existing adversarial camouflage textures that attack object detection models consider only the effectiveness of the attack and ignore its stealthiness, so the generated camouflage textures appear abrupt to human observers. To address this issue, we propose adding a style transfer module to an adversarial texture generation framework.
Neural Netw
September 2023
Indian Institute of Technology, Delhi, India.
Graph Neural Networks (GNNs) are powerful in learning rich network representations that aid the performance of downstream tasks. However, recent studies showed that GNNs are vulnerable to adversarial attacks involving node injection and network perturbation. Among these, node injection attacks are more practical as they do not require manipulation in the existing network and can be performed more realistically.
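To make the node-injection idea above concrete, the toy sketch below (plain PyTorch on a hand-built graph, not the paper's attack) shows how adding a single new node with chosen features shifts an existing node's representation under GCN-style propagation, without touching any existing edge or feature.

```python
# Toy illustration of a node-injection perturbation on a small GCN-style layer.
# The graph, features, and weights are assumptions for demonstration only.
import torch

def gcn_layer(adj, feats, weight):
    # Symmetrically normalized propagation: D^{-1/2} (A + I) D^{-1/2} X W
    a_hat = adj + torch.eye(adj.size(0))
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight

# Toy 3-node path graph (0-1-2) with 2-dim features and a fixed random weight.
torch.manual_seed(0)
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
feats = torch.randn(3, 2)
weight = torch.randn(2, 2)
clean_out = gcn_layer(adj, feats, weight)

# Inject one malicious node connected to node 0: the existing edges and
# features are untouched, yet node 0's representation changes.
adj_inj = torch.zeros(4, 4)
adj_inj[:3, :3] = adj
adj_inj[0, 3] = 1.
adj_inj[3, 0] = 1.
feats_inj = torch.cat([feats, torch.tensor([[5., -5.]])], dim=0)
attacked_out = gcn_layer(adj_inj, feats_inj, weight)

print(clean_out[0], attacked_out[0])  # node 0's embedding shifts after injection
```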