FedLGA: Toward System-Heterogeneity of Federated Learning via Local Gradient Approximation.

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Federated learning (FL) is a decentralized machine learning architecture that leverages a large number of remote devices to learn a joint model from distributed training data. However, system-heterogeneity is a major challenge to achieving robust distributed learning performance in an FL network, and it arises from two sources: 1) device-heterogeneity, due to the diverse computational capacities of the devices, and 2) data-heterogeneity, due to non-identically distributed data across the network. Prior studies addressing the heterogeneous FL issue, for example, FedProx, lack formalization, and the problem remains open. This work first formalizes the system-heterogeneous FL problem and proposes a new algorithm, called federated local gradient approximation (FedLGA), which addresses it by bridging the divergence of local model updates via gradient approximation. To achieve this, FedLGA provides an alternated Hessian estimation method that requires only linear extra complexity on the aggregator. Theoretically, we show that with a device-heterogeneous ratio $\rho$, FedLGA achieves convergence rates of $\mathcal{O}\!\left(\frac{1+\rho}{\sqrt{ENT}} + \frac{1}{T}\right)$ and $\mathcal{O}\!\left(\frac{(1+\rho)\sqrt{E}}{\sqrt{TK}} + \frac{1}{T}\right)$ on non-i.i.d. distributed FL training data for nonconvex optimization problems, under full and partial device participation, respectively, where $E$ is the number of local learning epochs, $T$ is the total number of communication rounds, $N$ is the total number of devices, and $K$ is the number of devices selected in one communication round under the partial participation scheme. The results of comprehensive experiments on multiple datasets indicate that FedLGA effectively addresses the system-heterogeneous problem and outperforms current FL methods. Specifically, on the CIFAR-10 dataset, FedLGA improves the model's best testing accuracy from 60.91% to 64.44% compared with FedAvg.
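The abstract describes FedLGA's mechanism only at a high level: the aggregator compensates for slow ("straggler") devices by approximating the gradient steps they did not complete, using a Hessian estimate that costs only linear extra work. The NumPy sketch below illustrates what such a server-side step could look like. It is a minimal sketch under assumptions not stated on this page (flat weight vectors, a diagonal Hessian estimate, and an illustrative first-order correction formula); the function names are hypothetical, not the paper's API.

# Hypothetical sketch of a FedLGA-style aggregation round (NumPy only).
# Assumptions (not from this page): weights are flat vectors, the server
# keeps a diagonal Hessian estimate h_diag, and a straggler's partial
# update is extrapolated to the full E local epochs via gradient
# approximation. All names here are illustrative.
import numpy as np

def approximate_update(w_global, w_local, epochs_done, epochs_target, h_diag, lr):
    """Extrapolate a straggler's partial local update to `epochs_target` epochs.

    Recovers the average per-epoch gradient implied by the partial update,
    then applies a diagonal-Hessian correction -- a stand-in for the paper's
    alternated Hessian estimation, which we only know adds linear complexity
    on the aggregator.
    """
    if epochs_done == epochs_target:
        return w_local  # device finished all local epochs; nothing to approximate
    # Average gradient implied by the epochs actually completed.
    g_avg = (w_global - w_local) / (lr * epochs_done)
    # First-order correction: adjust the gradient along the estimated curvature.
    g_corrected = g_avg + h_diag * (w_local - w_global)
    remaining = epochs_target - epochs_done
    return w_local - lr * remaining * g_corrected

def fedlga_round(w_global, local_results, epochs_target, h_diag, lr):
    """One communication round: approximate stragglers, then average (FedAvg-style)."""
    completed = [
        approximate_update(w_global, w_k, e_k, epochs_target, h_diag, lr)
        for w_k, e_k in local_results
    ]
    return np.mean(completed, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 10
    w = rng.normal(size=d)        # current global model
    h = np.full(d, 0.1)           # diagonal Hessian estimate: O(d) extra memory
    # Three devices: two finished all 5 local epochs, one stopped after 2.
    results = [(w - 0.05 * rng.normal(size=d), 5),
               (w - 0.05 * rng.normal(size=d), 5),
               (w - 0.02 * rng.normal(size=d), 2)]
    w_next = fedlga_round(w, results, epochs_target=5, h_diag=h, lr=0.01)
    print("updated global model:", w_next[:3], "...")

The design point the sketch mirrors is the abstract's linear-complexity claim: a diagonal Hessian estimate adds only O(d) memory and arithmetic per round on the aggregator, as opposed to the O(d^2) cost of forming a full Hessian.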


Source
http://dx.doi.org/10.1109/TCYB.2023.3247365

Publication Analysis

Top Keywords

gradient approximation: 12
federated learning: 8
local gradient: 8
distributed training: 8
training data: 8
system-heterogeneous problem: 8
communication round: 8
fedlga: 6
learning: 5
number: 5