Paper | 28 October 2024
FedGKD: personalized federated learning through grouping and distillation
Tianle Li, Sheng Lin, Pengfei Zhao, Ziqing Li, Jianshe Wang
Proceedings Volume 13404, Fifth International Conference on Control, Robotics, and Intelligent System (CCRIS 2024); 134040E (2024) https://doi.org/10.1117/12.3050605
Event: Fifth International Conference on Control, Robotics, and Intelligent System (CCRIS 2024), Macau, China
Abstract
The objective of traditional Federated Learning (FL) is to train a global model collaboratively across multiple clients without directly accessing client data. Traditional federated learning, however, is frequently hindered by heterogeneity in data, targets, and models. This work proposes a novel federated learning paradigm, Federated Group Distillation (FedGKD). First, clients are grouped according to their needs and conditions. A knowledge distillation strategy, DML, is then applied to the grouped clients, performing local distillation followed by global distillation until the model converges. Experiments demonstrate that FedGKD effectively mitigates all three types of heterogeneous interference, and that clients with diverse tasks and models benefit from FedGKD, ultimately achieving improved performance. In addition, FedGKD reduces the load on the server to some extent.
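The abstract describes the distillation strategy only at a high level. As a rough, hypothetical sketch (not the authors' implementation), the PyTorch snippet below illustrates the kind of mutual-distillation update that DML, presumably Deep Mutual Learning, applies within one group of clients: each model is trained on its own task loss plus a KL-divergence term toward each peer's softened predictions. All function names, hyperparameters, and the toy two-client setup are illustrative assumptions; the paper's actual grouping criteria and global-distillation phase are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

def dml_step(models, optimizers, x, y, temperature=2.0):
    # One mutual-distillation (DML-style) step within a group: each model
    # minimizes its task loss plus the mean KL divergence toward the
    # temperature-softened predictions of its peers. Peer logits are
    # detached, so each model updates only its own parameters.
    logits = [m(x) for m in models]
    for i, opt in enumerate(optimizers):
        task_loss = F.cross_entropy(logits[i], y)
        kd_loss = 0.0
        for j in range(len(models)):
            if j == i:
                continue
            kd_loss = kd_loss + F.kl_div(
                F.log_softmax(logits[i] / temperature, dim=1),
                F.softmax(logits[j].detach() / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
        loss = task_loss + kd_loss / max(len(models) - 1, 1)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Toy demo: two clients with different architectures mutually distill
# on the same toy batch (standing in for a group-level exchange).
torch.manual_seed(0)
models = [
    nn.Linear(10, 3),
    nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3)),
]
optimizers = [torch.optim.SGD(m.parameters(), lr=0.1) for m in models]
x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
for _ in range(5):
    dml_step(models, optimizers, x, y)

Because only predictions (logits) are exchanged, such a scheme lets clients with heterogeneous architectures learn from one another; in a full FedGKD-style pipeline, a similar exchange would presumably run across groups in the subsequent global-distillation phase.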
© (2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Tianle Li, Sheng Lin, Pengfei Zhao, Ziqing Li, and Jianshe Wang "FedGKD: personalized federated learning through grouping and distillation", Proc. SPIE 13404, Fifth International Conference on Control, Robotics, and Intelligent System (CCRIS 2024), 134040E (28 October 2024); https://doi.org/10.1117/12.3050605
RIGHTS & PERMISSIONS
Get copyright permission  Get copyright permission on Copyright Marketplace
KEYWORDS: Data modeling, Machine learning, Education and training, Head, Performance modeling, Laser sintering, Data communications