Poisoning Attacks in Federated Learning: A Survey
Federated learning faces many security and privacy issues. Among them, poisoning attacks can significantly impact the global model: malicious attackers can prevent the global model from converging or even manipulate its prediction results. Defending against poisoning attacks is a ve...
| Main Authors | Geming Xia, Jian Chen, Chaodong Yu, Jun Ma |
|---|---|
| Format | Article |
| Language | English |
| Published | IEEE, 2023-01-01 |
| Series | IEEE Access |
| Online Access | https://ieeexplore.ieee.org/document/10024252/ |
Similar Items

- CCF Based System Framework In Federated Learning Against Data Poisoning Attacks
  by: Ibrahim M. Ahmed, et al. Published: 2022-11-01
- Deep Model Poisoning Attack on Federated Learning
  by: Xingchen Zhou, et al. Published: 2021-03-01
- MPHM: Model poisoning attacks on federal learning using historical information momentum
  by: Shi Lei, et al. Published: 2023-01-01
- FLGQM: Robust Federated Learning Based on Geometric and Qualitative Metrics
  by: Shangdong Liu, et al. Published: 2023-12-01
- A Federated Learning Framework against Data Poisoning Attacks on the Basis of the Genetic Algorithm
  by: Ran Zhai, et al. Published: 2023-01-01