Towards Privacy-Preserving Federated Neuromorphic Learning via Spiking Neuron Models
Federated learning (FL) has been broadly adopted in both academia and industry in recent years. As a bridge to connect the so-called “data islands”, FL has contributed greatly to promoting data utilization. In particular, FL enables disjoint entities to cooperatively train a shared model, while prot...
Main Authors: | Bing Han, Qiang Fu, Xinliang Zhang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-09-01 |
Series: | Electronics |
Subjects: | federated learning, privacy protection, spiking neural networks |
Online Access: | https://www.mdpi.com/2079-9292/12/18/3984 |
_version_ | 1797580397810286592 |
author | Bing Han Qiang Fu Xinliang Zhang |
author_facet | Bing Han Qiang Fu Xinliang Zhang |
author_sort | Bing Han |
collection | DOAJ |
description | Federated learning (FL) has been broadly adopted in both academia and industry in recent years. As a bridge connecting the so-called “data islands”, FL has contributed greatly to promoting data utilization. In particular, FL enables disjoint entities to cooperatively train a shared model while protecting each participant’s data privacy. However, current FL frameworks cannot offer privacy protection and reduce computation overhead at the same time, which limits their deployment in practical scenarios such as edge computing. In this paper, we propose a novel FL framework with spiking neuron models and differential privacy, which simultaneously provides theoretically guaranteed privacy protection and low energy consumption. We model the local forward propagation process in a discrete way, similar to how nerve signals travel in the human brain. Since neurons fire only when the accumulated membrane potential exceeds a threshold, spiking neuron models require significantly less energy than traditional neural networks. In addition, to protect sensitive information in model gradients, we add differentially private noise in both the local training phase and the server aggregation phase. Empirical evaluation shows that our proposal effectively reduces the accuracy of membership inference and property inference attacks while maintaining a relatively low energy cost. For example, membership inference attack accuracy drops to 43% in some scenarios. As a result, our proposed FL framework can work well in large-scale cross-device learning scenarios. |
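The two mechanisms named in the abstract can be sketched briefly. This is not the authors' implementation; it is a minimal illustration of (1) a leaky integrate-and-fire neuron, where membrane potential accumulates discretely and a binary spike fires on crossing a threshold, and (2) gradient privatization via the Gaussian mechanism (clip, then add noise). All function names, the `decay`, `threshold`, `clip_norm`, and `noise_multiplier` parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lif_forward(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire sketch: membrane potential accumulates
    input over discrete time steps; a neuron emits a binary spike and
    resets whenever the potential reaches the threshold (assumed hard
    reset; the paper's exact neuron model may differ)."""
    v = np.zeros(inputs.shape[1])           # membrane potential per neuron
    spikes = np.zeros_like(inputs)
    for t in range(inputs.shape[0]):        # iterate over time steps
        v = decay * v + inputs[t]           # leaky accumulation
        fired = v >= threshold
        spikes[t] = fired.astype(float)     # binary spike train output
        v[fired] = 0.0                      # reset fired neurons
    return spikes

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip the gradient to bound per-update sensitivity, then add
    Gaussian noise scaled by clip_norm * noise_multiplier."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)   # L2 clipping
    noise = rng.normal(0.0, clip_norm * noise_multiplier, grad.shape)
    return clipped + noise
```

In the framework described above, a step like `privatize_gradient` would run on each client before upload (local training phase), with the server optionally adding further calibrated noise during aggregation; the spiking forward pass is what keeps per-client computation sparse and low-energy.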
first_indexed | 2024-03-10T22:50:25Z |
format | Article |
id | doaj.art-94493fa50190466db6fdf1b33f8c9f7f |
institution | Directory Open Access Journal |
issn | 2079-9292 |
language | English |
last_indexed | 2024-03-10T22:50:25Z |
publishDate | 2023-09-01 |
publisher | MDPI AG |
record_format | Article |
series | Electronics |
spelling | doaj.art-94493fa50190466db6fdf1b33f8c9f7f2023-11-19T10:24:06ZengMDPI AGElectronics2079-92922023-09-011218398410.3390/electronics12183984Towards Privacy-Preserving Federated Neuromorphic Learning via Spiking Neuron ModelsBing Han0Qiang Fu1Xinliang Zhang2China National Institute of Standardization, Beijing 100191, ChinaChina National Institute of Standardization, Beijing 100191, ChinaChina National Institute of Standardization, Beijing 100191, ChinaFederated learning (FL) has been broadly adopted in both academia and industry in recent years. As a bridge to connect the so-called “data islands”, FL has contributed greatly to promoting data utilization. In particular, FL enables disjoint entities to cooperatively train a shared model, while protecting each participant’s data privacy. However, current FL frameworks cannot offer privacy protection and reduce the computation overhead at the same time. Therefore, its implementation in practical scenarios, such as edge computing, is limited. In this paper, we propose a novel FL framework with spiking neuron models and differential privacy, which simultaneously provides theoretically guaranteed privacy protection and achieves low energy consumption. We model the local forward propagation process in a discrete way similar to nerve signal travel in the human brain. Since neurons only fire when the accumulated membrane potential exceeds a threshold, spiking neuron models require significantly lower energy compared to traditional neural networks. In addition, to protect sensitive information in model gradients, we add differentially private noise in both the local training phase and server aggregation phase. Empirical evaluation results show that our proposal can effectively reduce the accuracy of membership inference attacks and property inference attacks, while maintaining a relatively low energy cost. For example, the attack accuracy of a membership inference attack drops to 43% in some scenarios. As a result, our proposed FL framework can work well in large-scale cross-device learning scenarios.https://www.mdpi.com/2079-9292/12/18/3984federated learningprivacy protectionspiking neural networks |
spellingShingle | Bing Han Qiang Fu Xinliang Zhang Towards Privacy-Preserving Federated Neuromorphic Learning via Spiking Neuron Models Electronics federated learning privacy protection spiking neural networks |
title | Towards Privacy-Preserving Federated Neuromorphic Learning via Spiking Neuron Models |
title_full | Towards Privacy-Preserving Federated Neuromorphic Learning via Spiking Neuron Models |
title_fullStr | Towards Privacy-Preserving Federated Neuromorphic Learning via Spiking Neuron Models |
title_full_unstemmed | Towards Privacy-Preserving Federated Neuromorphic Learning via Spiking Neuron Models |
title_short | Towards Privacy-Preserving Federated Neuromorphic Learning via Spiking Neuron Models |
title_sort | towards privacy preserving federated neuromorphic learning via spiking neuron models |
topic | federated learning privacy protection spiking neural networks |
url | https://www.mdpi.com/2079-9292/12/18/3984 |
work_keys_str_mv | AT binghan towardsprivacypreservingfederatedneuromorphiclearningviaspikingneuronmodels AT qiangfu towardsprivacypreservingfederatedneuromorphiclearningviaspikingneuronmodels AT xinliangzhang towardsprivacypreservingfederatedneuromorphiclearningviaspikingneuronmodels |