Toward Efficient Hierarchical Federated Learning Design Over Multi-Hop Wireless Communications Networks

Federated learning (FL) has recently received considerable attention and is becoming a popular machine learning (ML) framework that allows clients to train machine learning models in a decentralized fashion without sharing any private dataset. In the FL framework, data for learning tasks are acquired and processed locally at the edge node, and only the updated ML parameters are transmitted to the central server for aggregation. However, because local FL parameters and the global FL model are transmitted over wireless links, wireless network performance will affect FL training performance. In particular, the number of resource blocks is limited; thus, the number of devices participating in FL is limited. Furthermore, edge nodes often have substantial constraints on their resources, such as memory, computation power, communication, and energy, severely limiting their capability to train large models locally. This paper proposes a two-hop communication protocol with a dynamic resource allocation strategy to investigate the possibility of bandwidth allocation from a limited network resource to the maximum number of clients participating in FL. In particular, we utilize an ordinary hierarchical FL with an adaptive grouping mechanism to select participating clients and elect a leader for each group based on its capability to upload the aggregated parameters to the central server. Our experimental results demonstrate that the proposed solution outperforms the baseline algorithm in terms of communication cost and model accuracy.
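The abstract outlines a two-hop, group-based aggregation flow: clients are organized into groups, each group elects a leader according to its capability to upload the aggregated parameters, and only the leaders forward group aggregates to the central server. The Python sketch below is purely illustrative of that flow; the class names, the toy uplink-bandwidth proxy used for leader election, and the placeholder local-training step are assumptions, and the paper's adaptive grouping and dynamic resource allocation strategy are not reproduced here.

# Minimal sketch (not the authors' implementation) of two-hop hierarchical FL:
# members send updates to an elected group leader (hop 1), and only leaders
# upload group aggregates to the central server (hop 2).
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class Client:
    client_id: int
    uplink_mbps: float                      # assumed proxy for upload capability
    params: List[float] = field(default_factory=list)

    def local_update(self, global_params: List[float]) -> List[float]:
        # Placeholder for local training: perturb the global model slightly.
        self.params = [w + random.uniform(-0.01, 0.01) for w in global_params]
        return self.params

def average(updates: List[List[float]]) -> List[float]:
    # Unweighted FedAvg-style aggregation (weighting by dataset size omitted).
    return [sum(ws) / len(ws) for ws in zip(*updates)]

def form_groups(clients: List[Client], group_size: int) -> List[List[Client]]:
    # Simplistic fixed-size grouping; the paper's adaptive grouping is not reproduced.
    return [clients[i:i + group_size] for i in range(0, len(clients), group_size)]

def run_round(clients: List[Client], global_params: List[float], group_size: int) -> List[float]:
    group_updates = []
    for group in form_groups(clients, group_size):
        # Elect the member with the best uplink as the group leader.
        leader = max(group, key=lambda c: c.uplink_mbps)
        # Hop 1: every member trains locally and sends its update to the leader.
        member_updates = [c.local_update(global_params) for c in group]
        # Hop 2: only the leader uploads the group-aggregated parameters.
        group_updates.append(average(member_updates))
        print(f"group led by client {leader.client_id} uploads its aggregate")
    # Server-side aggregation over the leaders' uploads.
    return average(group_updates)

if __name__ == "__main__":
    random.seed(0)
    clients = [Client(i, uplink_mbps=random.uniform(1, 50)) for i in range(12)]
    global_params = [0.0] * 4
    for _ in range(3):                      # a few communication rounds
        global_params = run_round(clients, global_params, group_size=4)
    print("final global parameters:", global_params)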

Bibliographic Details
Main Authors: Tu Viet Nguyen, Nhan Duc Ho, Hieu Thien Hoang, Cuong Danh Do, Kok-Seng Wong
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Access
Subjects: Federated learning; distributed machine learning; multi-hop wireless networks; communication-efficiency; bandwidth optimization
Online Access:https://ieeexplore.ieee.org/document/9924192/
Publication Details
DOI: 10.1109/ACCESS.2022.3215758
Published in: IEEE Access, vol. 10, pp. 111910-111922, 2022 (IEEE article number 9924192)
ISSN: 2169-3536
Author Affiliations:
Tu Viet Nguyen (ORCID: 0000-0002-6459-4624), Wireless Communications and Connectivity Division, Broadcom Ltd., San Diego, CA, USA
Nhan Duc Ho, College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam
Hieu Thien Hoang (ORCID: 0000-0002-5435-8468), College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam
Cuong Danh Do, College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam
Kok-Seng Wong (ORCID: 0000-0002-2029-7644), College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam