Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Traditional machine learning (ML) relies on a centralized architecture, which makes it unsuitable for applications where data privacy is critical. Federated Learning (FL) addresses this issue by allowing multiple parties to collaboratively train models without sharing their raw data. However, FL is susceptible to data and model poisoning attacks that can severely disrupt the learning process. The existing literature indicates that defense mechanisms predominantly analyze client updates on the server side, often without requiring or involving client cooperation. This paper proposes a novel defense mechanism, SpyShield, that leverages client cooperation to identify malicious clients in data and model poisoning attacks. SpyShield is inspired by tactics used in the social deduction game Spyfall, in which the majority of players must detect the deception of a minority, a dynamic that aligns with the challenges posed by poisoning attacks in ML. We evaluate the robustness and performance of four configurations of SpyShield on the FashionMNIST dataset against five benchmark aggregation algorithms (FedAvg, Krum, Multi-Krum, Median, and Trimmed Mean) under three attack types: (A) Cyclic Label Flipping, (B) Random Label Flipping, and (C) Random Weight Attacks. Each attack is tested across three scenarios: (I) 3 malicious clients out of 30, (II) 10 out of 50, and (III) 40 out of 100, totaling nine experimental settings. These settings simulate varying attack intensities, allowing SpyShield's effectiveness to be assessed under different degrees of attack invasiveness. In every setting, at least one configuration of SpyShield outperformed all benchmark algorithms, achieving the highest accuracy. The evaluation shows that SpyShield achieves strong performance and resilience across diverse settings and attack types. These findings highlight its potential as a robust and generalizable defense mechanism for securing federated learning models, while also opening new possibilities for collaborative strategies that move beyond centralized server-side analysis.
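The abstract does not include implementation details, but the three attack types it evaluates can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical illustration using NumPy; the function names and the exact flipping rules (for example, shifting each label by one class for the cyclic variant) are assumptions for exposition, not the paper's code.

```python
import numpy as np

NUM_CLASSES = 10  # FashionMNIST has 10 classes

def cyclic_label_flip(labels):
    """Attack (A): shift every label to the next class, modulo the class count."""
    return (labels + 1) % NUM_CLASSES

def random_label_flip(labels, rng):
    """Attack (B): replace every label with a uniformly chosen different class."""
    offsets = rng.integers(1, NUM_CLASSES, size=labels.shape)
    return (labels + offsets) % NUM_CLASSES

def random_weight_attack(weights, rng, scale=1.0):
    """Attack (C): discard the honest update and submit random weight tensors."""
    return [rng.normal(0.0, scale, size=w.shape) for w in weights]

# Example: how a malicious client might corrupt its local labels.
rng = np.random.default_rng(0)
y = rng.integers(0, NUM_CLASSES, size=8)
print("original:", y)
print("cyclic:  ", cyclic_label_flip(y))
print("random:  ", random_label_flip(y, rng))
```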

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12381036
DOI: http://dx.doi.org/10.1038/s41598-025-16158-3

Publication Analysis

Top Keywords

poisoning attacks: 16
defense mechanism: 12
federated learning: 12
data model: 8
model poisoning: 8
client cooperation: 8
malicious clients: 8
attack types: 8
label flipping: 8
flipping random: 8

Similar Publications

Backdoor samples detection based on perturbation discrepancy consistency in pre-trained language models.

Neural Netw

August 2025

Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430000, Hubei, China.

The use of unvetted third-party and internet data renders pre-trained models susceptible to backdoor attacks. Detecting backdoor samples is critical to prevent backdoor activation during inference or injection during training. However, existing detection methods often require the defender to have access to the poisoned models, extra clean samples, or significant computational resources to detect backdoor samples, limiting their practicality.

Acute thallium poisoning: An autopsy case report and review of the literature.

Leg Med (Tokyo)

August 2025

Department of Legal Medicine, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno, Osaka 545-8585, Japan.

We report a case of death due to thallium (Tl) poisoning and discuss the findings with reference to the literature on Tl poisoning. A woman in her 20s with a history of asthma developed symptoms such as coughing, vomiting, and fatigue, and visited a primary-care physician. She was rushed to another hospital on suspicion of an asthma attack, but her condition suddenly worsened and she died 3 days later.

Backdoor attacks present a significant threat to the reliability of machine learning models, including Graph Neural Networks (GNNs), by embedding triggers that manipulate model behavior. While many existing defenses focus on identifying these vulnerabilities, few address restoring model accuracy after an attack. This paper introduces a method for restoring the original accuracy of GNNs affected by backdoor attacks, a task complicated by the complex structure of graph data.

With the rise of the smart industry, machine learning (ML) has become a popular method to improve the security of the Industrial Internet of Things (IIoT) by training anomaly detection models. Federated learning (FL) is a distributed ML scheme that facilitates anomaly detection on IIoT by preserving data privacy and breaking data silos. However, poisoning attacks pose significant threats to FL, where adversaries upload poisoned local models to the aggregation server, thereby degrading model accuracy.
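As a rough illustration of why a single poisoned local model uploaded to the aggregation server can degrade the global model under plain averaging, and why coordinate-wise robust aggregators such as the median limit the damage, consider the following generic NumPy sketch. It is not the cited paper's method; all names and values are illustrative.

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging: element-wise mean of all client updates."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Coordinate-wise median: tolerant to a minority of outlier updates."""
    return np.median(updates, axis=0)

rng = np.random.default_rng(1)
honest = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]  # nine benign updates near zero
poisoned = [np.full(4, 100.0)]                             # one large-magnitude malicious update

updates = np.stack(honest + poisoned)
print("FedAvg:", fedavg(updates))             # pulled roughly +10 away from the benign consensus
print("Median:", coordinate_median(updates))  # remains close to the benign updates
```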
