Backdoor attacks present a significant threat to the reliability of machine learning models, including Graph Neural Networks (GNNs), by embedding triggers that manipulate model behavior. While many existing defenses focus on identifying these vulnerabilities, few address restoring model accuracy after an attack. This paper introduces a method for restoring the original accuracy of GNNs affected by backdoor attacks, a task complicated by the complex structure of graph data. Our approach combines advanced filtering and augmentation techniques that enhance the GNN's resilience against hidden triggers. The filtering mechanisms remove suspicious data points to minimize the influence of poisoned inputs, while augmentation introduces controlled variation to strengthen the model against backdoor triggers. To optimize restoration, we present an adaptive framework that adjusts the balance between filtering and augmentation based on model sensitivity and attack severity, reducing both false positives and false negatives. Additionally, we incorporate Explainable AI (XAI) techniques to improve the interpretability of the model's decision-making process, enabling transparent detection and understanding of backdoor triggers. Results demonstrate that our method achieves an average accuracy restoration of 97-99% across various backdoor attack scenarios, providing an effective solution for maintaining the performance and integrity of GNNs in sensitive applications.
DOI: http://dx.doi.org/10.1016/j.neunet.2025.107990
Neural Netw
August 2025
School of Computer Science and Information Technology, Institute of Management Sciences, Peshawar, Pakistan.
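As a rough illustration of the restoration loop the abstract describes, the sketch below combines loss-based filtering of suspicious nodes with random edge-drop augmentation, tied together by an adaptive weight. Everything here is assumed for illustration: the toy two-layer GCN in plain PyTorch, the synthetic random graph, the outlier-style filter, the edge-drop augmentation, and the `alpha` balancing rule are stand-ins, not the authors' implementation.

```python
# Minimal sketch: filter-and-augment training with a hypothetical adaptive balance `alpha`.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N, F_IN, C = 200, 16, 4                        # nodes, features, classes (synthetic toy data)
X = torch.randn(N, F_IN)
y = torch.randint(0, C, (N,))
A = (torch.rand(N, N) < 0.05).float()
A = ((A + A.t()) > 0).float()                  # symmetric adjacency
A.fill_diagonal_(1.0)                          # self-loops

def normalize(adj):
    """Symmetric normalization D^{-1/2} A D^{-1/2} used by GCN layers."""
    d_inv_sqrt = adj.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class ToyGCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(F_IN, 32)
        self.lin2 = nn.Linear(32, C)

    def forward(self, x, adj_hat):
        h = F.relu(adj_hat @ self.lin1(x))
        return adj_hat @ self.lin2(h)

model = ToyGCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
alpha = 0.5                                    # hypothetical filtering/augmentation balance

for epoch in range(50):
    model.train()
    # Augmentation: randomly drop edges, more aggressively when alpha favors augmentation.
    keep = (torch.rand_like(A) > 0.1 * (1 - alpha)).float()
    A_aug = A * keep
    A_aug.fill_diagonal_(1.0)                  # keep self-loops so no node becomes isolated
    adj_hat = normalize(A_aug)
    per_node = F.cross_entropy(model(X, adj_hat), y, reduction="none")
    # Filtering: mask out nodes whose loss is anomalously high (suspected poisoned).
    cutoff = per_node.mean() + (2.0 - alpha) * per_node.std()
    mask = (per_node < cutoff).float()
    loss = (mask * per_node).sum() / mask.sum().clamp(min=1.0)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Adaptive balance: lean harder on filtering when many nodes look suspicious.
    suspicious_frac = 1.0 - mask.mean().item()
    alpha = min(0.9, max(0.1, 0.5 + suspicious_frac))
```

The intended behavior: when many nodes look suspicious, `alpha` grows and filtering dominates; otherwise the loop leans more on augmentation, loosely mirroring the adaptive balance between the two mechanisms described in the abstract above.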
Neural Netw
November 2025
Purple Mountain Laboratories, Nanjing, China.
Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. While various defense models have been proposed, they often fail to account for the variability in both data and attacks, limiting their effectiveness in dynamic environments. Therefore, we propose DERG, a dynamic ensemble learning model for robust GNNs, which leverages multiple views of the graph data and dynamically changing submodels for defense.
Neural Netw
October 2025
College of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China.
Graph backdoor attacks can significantly degrade the performance of graph neural networks (GNNs). Specifically, during the training phase, graph backdoor attacks inject triggers and target class labels into poisoned nodes to create a backdoored GNN. During the testing phase, triggers are added to target nodes, causing them to be misclassified as the target class.
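A minimal sketch of the trigger-injection step this snippet describes is given below, assuming an adjacency-list graph, a fixed three-node clique as the trigger, and a hypothetical `poison_rate`; it is a generic illustration of graph backdoor poisoning, not the specific attack studied in the cited paper.

```python
# Generic sketch: attach a small trigger subgraph to selected nodes and relabel them.
import random

random.seed(0)

def inject_trigger(adj, labels, target_class, poison_rate=0.05):
    """Attach a three-node clique trigger to a random fraction of nodes and relabel them."""
    n = len(labels)
    poisoned = random.sample(range(n), max(1, int(poison_rate * n)))
    for v in poisoned:
        # Three new trigger nodes wired into a clique and attached to node v.
        t = [len(adj) + i for i in range(3)]
        for u in t:
            adj[u] = []
            labels.append(target_class)        # trigger nodes carry the target label
        for i in range(3):
            for j in range(i + 1, 3):
                adj[t[i]].append(t[j])
                adj[t[j]].append(t[i])
        adj[v].append(t[0])
        adj[t[0]].append(v)
        labels[v] = target_class               # poisoned node is relabeled to the target class
    return poisoned

# Toy usage: a 10-node path graph with random binary labels.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
labels = [random.randint(0, 1) for _ in range(10)]
print("poisoned nodes:", inject_trigger(adj, labels, target_class=1))
```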
Med Biol Eng Comput
July 2025
Lanzhou Petrochemical General Hospital, Lanzhou, China.
Breast cancer image classification remains a challenging task due to the high-resolution nature of pathological images and their complex feature distributions. Graph neural networks (GNNs) offer promising capabilities to capture local structural information but often suffer from limited generalization and reliance on shortcut features. This study proposes a novel causal discovery attention-based graph neural network (CDA-GNN) model.
Neural Netw
April 2024
College of Electronic and Information Engineering, Southwest University, Chongqing, 400715, China.
Graph Neural Networks (GNNs) are often viewed as black boxes due to their lack of transparency, which hinders their application in critical fields. Many explanation methods have been proposed to address the interpretability issue of GNNs. These explanation methods reveal explanatory information about graphs from different perspectives.