Study of attacks on federated learning
In today’s era, people are becoming increasingly aware of the data privacy issues that traditional centralised machine learning can cause even as it brings convenience to everyday life. To tackle this problem, Federated Learning has emerged as an alternative for the distributed training of large-scale deep ne...
Main Author: | Thung, Jia Cheng |
---|---|
Other Authors: | Yeo Chai Kiat |
Format: | Final Year Project (FYP) |
Language: | English |
Published: | Nanyang Technological University, 2021 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
Online Access: | https://hdl.handle.net/10356/154018 |
_version_ | 1826109716530462720 |
---|---|
author | Thung, Jia Cheng |
author2 | Yeo Chai Kiat |
author_facet | Yeo Chai Kiat Thung, Jia Cheng |
author_sort | Thung, Jia Cheng |
collection | NTU |
description | In today’s era, people are becoming increasingly aware of the data privacy issues that traditional centralised machine learning can cause even as it brings convenience to everyday life. To tackle this problem, Federated Learning has emerged as an alternative for the distributed training of large-scale deep neural networks, in which only model updates are shared with a central server. However, this decentralised form of machine learning gives rise to new security threats from potentially malicious participants. This project studies a targeted data poisoning attack against Federated Learning known as the label-flipping attack, in which malicious participants poison the global model by submitting updates trained on deliberately mislabelled data. The project examines the factors that determine the attack’s impact on the global model. It first demonstrates that the attack causes substantial drops in classification accuracy and class recall, even with a small percentage of malicious participants. It then compares the impact of targeting multiple classes with that of targeting a single class. Finally, the longevity of the attack under early-round versus late-round poisoning and under varying malicious-participant availability is studied, along with the relationship between the two. A defence strategy is proposed that identifies malicious participants by detecting model updates whose gradients are dissimilar to those of the other participants. |
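The label-flipping attack and the gradient-dissimilarity defence that the abstract describes can be sketched in a toy simulation. The thesis itself is not reproduced in this record, so everything below (logistic-regression clients, the number of participants, cosine-similarity filtering with a median threshold) is an illustrative assumption, not the author's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy federated setup (illustrative assumption, not the thesis code):
# binary logistic regression over D features, 5 clients, client 0 malicious.
D, CLIENTS, MALICIOUS = 4, 5, {0}
w_true = rng.normal(size=D)

def make_client_data(n=200):
    X = rng.normal(size=(n, D))
    y = (X @ w_true > 0).astype(float)
    return X, y

def local_update(w0, X, y, lr=0.5, steps=20):
    """Run a few local gradient steps and return the model delta."""
    w = w0.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w - w0

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

data = [make_client_data() for _ in range(CLIENTS)]
w = np.zeros(D)
for _ in range(10):  # federated training rounds
    updates = []
    for i, (X, y) in enumerate(data):
        y_local = 1.0 - y if i in MALICIOUS else y  # label-flipping attack
        updates.append(local_update(w, X, y_local))
    # Defence sketch: score each update by its mean cosine similarity to the
    # other updates, then aggregate only those at or above the median score.
    sims = [np.mean([cosine(u, v) for j, v in enumerate(updates) if j != i])
            for i, u in enumerate(updates)]
    keep = [u for s, u in zip(sims, updates) if s >= np.median(sims)]
    w += np.mean(keep, axis=0)
```

Because the flipped labels drive the malicious client's update in roughly the opposite direction to the honest updates, its mean cosine similarity is the lowest of the group, and the median filter excludes it from aggregation, which is the intuition behind flagging participants whose updates produce dissimilar gradients.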
first_indexed | 2024-10-01T02:22:31Z |
format | Final Year Project (FYP) |
id | ntu-10356/154018 |
institution | Nanyang Technological University |
language | English |
last_indexed | 2024-10-01T02:22:31Z |
publishDate | 2021 |
publisher | Nanyang Technological University |
record_format | dspace |
spelling | ntu-10356/1540182021-12-17T01:46:50Z Study of attacks on federated learning Thung, Jia Cheng Yeo Chai Kiat School of Computer Science and Engineering ASCKYEO@ntu.edu.sg Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence In today’s era, people are becoming increasingly aware of the data privacy issues that traditional centralised machine learning can cause even as it brings convenience to everyday life. To tackle this problem, Federated Learning has emerged as an alternative for the distributed training of large-scale deep neural networks, in which only model updates are shared with a central server. However, this decentralised form of machine learning gives rise to new security threats from potentially malicious participants. This project studies a targeted data poisoning attack against Federated Learning known as the label-flipping attack, in which malicious participants poison the global model by submitting updates trained on deliberately mislabelled data. The project examines the factors that determine the attack’s impact on the global model. It first demonstrates that the attack causes substantial drops in classification accuracy and class recall, even with a small percentage of malicious participants. It then compares the impact of targeting multiple classes with that of targeting a single class. Finally, the longevity of the attack under early-round versus late-round poisoning and under varying malicious-participant availability is studied, along with the relationship between the two. A defence strategy is proposed that identifies malicious participants by detecting model updates whose gradients are dissimilar to those of the other participants. Bachelor of Engineering (Computer Science) 2021-12-17T01:46:49Z 2021-12-17T01:46:49Z 2021 Final Year Project (FYP) Thung, J. C. (2021). Study of attacks on federated learning. Final Year Project (FYP), Nanyang Technological University, Singapore. 
https://hdl.handle.net/10356/154018 https://hdl.handle.net/10356/154018 en SCSE20-0799 application/pdf Nanyang Technological University |
spellingShingle | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence Thung, Jia Cheng Study of attacks on federated learning |
title | Study of attacks on federated learning |
title_full | Study of attacks on federated learning |
title_fullStr | Study of attacks on federated learning |
title_full_unstemmed | Study of attacks on federated learning |
title_short | Study of attacks on federated learning |
title_sort | study of attacks on federated learning |
topic | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
url | https://hdl.handle.net/10356/154018 |
work_keys_str_mv | AT thungjiacheng studyofattacksonfederatedlearning |