
Communication-Efficient Federated Multi-View Clustering. | LitMetric

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Federated multi-view clustering is an emerging machine learning paradigm that clusters data whose views are distributed across isolated clients while preserving privacy. Although recent studies have proposed a few feasible solutions, they are severely limited by two drawbacks. First, clients must share their data representations at every iteration of model training, incurring heavy communication overhead. Second, existing approaches handle large-scale data with matrix factorization or neural-network encoding techniques, failing to exploit pairwise similarity information sufficiently. To address these issues, we propose a communication-efficient federated multi-view clustering framework that approximates the data representation with a pseudo-label matrix and a centroid matrix, and only these two are shared during model training. The framework is then instantiated with a linear kernel function to account for pairwise data similarities. Notably, the linear kernels never need to be computed explicitly, so the resulting method can be optimized with complexity linear in the number of samples. The proposed method is evaluated on benchmark datasets. It not only achieves inspiring results (26.84% accuracy improvement on average, 2.9x-2153x computation speedup, and up to 98.4% communication-overhead reduction) compared with existing federated multi-view clustering methods, but also outperforms centralized multi-view clustering approaches in both performance and computational efficiency.
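The two ideas the abstract highlights can be illustrated with a small NumPy sketch: (1) a client's n x d representation can be approximated by a pseudo-label assignment plus a k x d centroid matrix, which is far cheaper to transmit; (2) products with the linear kernel K = XX^T can be computed implicitly as X(X^T v), avoiding the n x n matrix. This is a minimal sketch under illustrative assumptions (synthetic data, round-robin pseudo-labels), not the paper's actual implementation; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 32, 5                  # samples, feature dim, clusters

X = rng.standard_normal((n, d))        # one client's view (local data)

# (1) Approximate the representation by pseudo-labels + centroids.
#     Sharing n pseudo-labels plus a k x d centroid matrix is much
#     cheaper than sharing the full n x d representation each round.
labels = np.arange(n) % k              # illustrative pseudo-labels
Y = np.eye(k)[labels]                  # n x k one-hot assignment matrix
C = np.stack([X[labels == j].mean(axis=0) for j in range(k)])  # k x d centroids
H_approx = Y @ C                       # n x d approximation of X

full_cost = n * d                      # values sent if X itself is shared
approx_cost = n + k * d                # pseudo-labels + centroid matrix
print(full_cost, approx_cost)          # prints: 32000 1160

# (2) Linear-kernel products without materialising the n x n kernel:
#     (X X^T) v = X (X^T v), costing O(n d) instead of O(n^2).
v = rng.standard_normal(n)
Kv_implicit = X @ (X.T @ v)            # never forms K
Kv_explicit = (X @ X.T) @ v            # O(n^2) reference computation
assert np.allclose(Kv_implicit, Kv_explicit)
```

The cost comparison (32000 vs. 1160 values per round here) is what drives the communication-overhead reduction the abstract reports, and the implicit kernel product is why the optimization stays linear in the number of samples.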


Source: http://dx.doi.org/10.1109/TPAMI.2025.3601533

Publication Analysis

Top Keywords

multi-view clustering: 20
federated multi-view: 16
communication-efficient federated: 8
model training: 8
communication overhead: 8
multi-view: 5
clustering: 5
data: 5
clustering federated: 4
clustering emerging: 4

Similar Publications