
DFedADMM: Dual Constraint Controlled Model Inconsistency for Decentralized Federated Learning | LitMetric

Category Ranking: 98% · Total Visits: 921 · Avg Visit Duration: 2 minutes · Citations: 20

Article Abstract

To address the communication burden associated with Federated Learning (FL), Decentralized Federated Learning (DFL) discards the central server and establishes a decentralized communication network in which each client communicates only with its neighbors. However, existing DFL methods still suffer from two major challenges that have not been fundamentally addressed: local inconsistency and local heterogeneous overfitting. To tackle these issues, we propose novel DFL algorithms, DFedADMM and its enhanced version DFedADMM-SAM. DFedADMM employs primal-dual optimization (ADMM), using dual variables to control the model inconsistency arising from decentralized heterogeneous data distributions. DFedADMM-SAM further improves on DFedADMM with a Sharpness-Aware Minimization (SAM) optimizer, which uses gradient perturbations to generate locally flat models and searches for models with uniformly low loss values, mitigating local heterogeneous overfitting. Theoretically, we derive convergence rates of $\mathcal{O}(\frac{1}{\sqrt{KT}}+\frac{1}{KT(1-\psi)^{2}})$ and $\mathcal{O}(\frac{1}{\sqrt{KT}}+\frac{1}{KT(1-\psi)^{2}}+\frac{1}{T^{3/2}K^{1/2}})$ in the non-convex setting for DFedADMM and DFedADMM-SAM, respectively, where $1-\psi$ is the spectral gap of the gossip matrix. Empirically, extensive experiments on the MNIST, CIFAR10, and CIFAR100 datasets demonstrate that our algorithms outperform existing state-of-the-art (SOTA) DFL optimizers in terms of generalization, convergence speed, and communication overhead.
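The abstract's description of the SAM optimizer (perturb the weights in the locally worst-case direction, then update with the gradient taken at the perturbed point) can be illustrated with a minimal sketch of a generic SAM step on a toy quadratic loss. This is not the paper's DFedADMM-SAM algorithm; the loss function and the hyperparameters `rho` and `lr` are illustrative assumptions.

```python
import numpy as np

def loss_grad(w):
    # Gradient of a toy quadratic loss L(w) = 0.5 * ||w - 1||^2.
    return w - 1.0

def sam_step(w, rho=0.05, lr=0.1):
    """One generic Sharpness-Aware Minimization step."""
    g = loss_grad(w)
    # Ascent step: move to the (approximately) worst-case point
    # within an L2 ball of radius rho around the current weights.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descent step: apply the gradient evaluated at the perturbed
    # point to the original weights, favoring flat minima.
    return w - lr * loss_grad(w + eps)

w = np.zeros(3)
for _ in range(200):
    w = sam_step(w)
print(w)  # hovers near the minimizer w = 1, within roughly rho of it
```

Because the perturbation radius `rho` is fixed, the iterates settle in a small neighborhood of the minimizer rather than converging exactly; in practice SAM is applied per mini-batch inside a standard training loop.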

Source
DOI: http://dx.doi.org/10.1109/TPAMI.2025.3546659

Publication Analysis

Top Keywords

federated learning (12)
model inconsistency (8)
existing dfl (8)
dfl methods (8)
local heterogeneous (8)
heterogeneous overfitting (8)
dfl (6)
dfedadmm (5)
dfedadmm dual (4)
dual constraint (4)
