Homogeneous Learning: Self-Attention Decentralized Deep Learning

Federated learning (FL) has enabled privacy-preserving deep learning in many domains, such as medical image classification and network intrusion detection. However, FL requires a central parameter server for model aggregation, which delays model communication and leaves the system vulnerable to adversarial attacks. A fully decentralized architecture such as Swarm Learning instead allows peer-to-peer communication among distributed nodes, with no central server. One of the most challenging issues in decentralized deep learning is that the data owned by each node are usually non-independent and identically distributed (non-IID), which slows the convergence of model training. To address this, we propose Homogeneous Learning (HL), a decentralized learning model that tackles non-IID data with a self-attention mechanism. In HL, training is performed on a single selected node in each round, and at the end of the round the trained model is sent to the next selected node. For this selection, the self-attention mechanism leverages reinforcement learning to observe a node's inner state and the state of its surrounding environment, and to determine which node should be selected to optimize training. We evaluate our method in various scenarios on two image classification tasks. The results suggest that HL achieves better performance than standalone learning and, for decentralized learning with non-IID data, reduces the total training rounds by 50.8% and the communication cost by 74.6%.
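
The round-based protocol described above can be sketched compactly. The following Python sketch (NumPy only) is an illustration, not the authors' released code: the names Node, train_local, inner_state, select_next_node, and hl_round are hypothetical, and the label-distribution state and softmax attention score are assumed stand-ins for the paper's RL-trained self-attention selection policy.

```python
# Minimal sketch of one Homogeneous Learning (HL) round.
# All names and modeling choices here are hypothetical illustrations of the
# abstract's description, not the authors' implementation.
import numpy as np

class Node:
    """A peer holding a private, possibly non-IID data shard."""
    def __init__(self, node_id, data, labels):
        self.node_id = node_id
        self.data = data
        self.labels = np.asarray(labels)

    def train_local(self, weights):
        # Placeholder for a local SGD update on this node's shard.
        return weights

    def inner_state(self):
        # Compact node state; the label distribution of the local shard is
        # one plausible choice (an assumption, not the paper's definition).
        return np.bincount(self.labels, minlength=10) / len(self.labels)

def select_next_node(states, current):
    # Scaled dot-product attention: the current node's state is the query,
    # every node's state is a key; the softmax weights act as the selection
    # policy. In the paper, node selection is driven by a reinforcement-
    # learning policy trained to cut total rounds and communication cost.
    S = np.stack(states)                  # (num_nodes, state_dim)
    q = S[current]                        # query: current node's state
    scores = S @ q / np.sqrt(S.shape[1])  # attention scores over all nodes
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(np.random.choice(len(states), p=probs))

def hl_round(nodes, weights, current):
    # One communication round: train on the selected node, then hand the
    # trained model to the node chosen by the attention-based policy.
    weights = nodes[current].train_local(weights)
    states = [n.inner_state() for n in nodes]
    return weights, select_next_node(states, current)

# Example: five nodes with random label shards; start the chain at node 0.
nodes = [Node(i, data=None, labels=np.random.randint(0, 10, 100)) for i in range(5)]
weights, nxt = hl_round(nodes, weights=None, current=0)
```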

Bibliographic Details
Main Authors: Yuwei Sun (ORCID: 0000-0001-7315-8034), Hideya Ochiai (ORCID: 0000-0002-4568-6726)
Affiliation: Graduate School of Information Science and Technology, University of Tokyo, Tokyo, Japan
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Access, vol. 10, pp. 7695-7703
DOI: 10.1109/ACCESS.2022.3142899
ISSN: 2169-3536
Subjects: Collective intelligence; distributed computing; knowledge transfer; multi-layer neural network; supervised learning
Online Access: https://ieeexplore.ieee.org/document/9680704/