Data Poisoning Attacks With Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles


Bibliographic Details
Main Authors: Chi Cui, Haiping Du, Zhijuan Jia, Xiaofei Zhang, Yuchu He, Yanyan Yang
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10332177/
Description
Summary: As a state-of-the-art distributed learning approach, federated learning has gained much popularity in connected and autonomous vehicles (CAVs). In federated learning, models are trained locally, and only model parameters, rather than raw data, are exchanged to aggregate a global model. Compared with traditional learning approaches, the stronger privacy protection and reduced network bandwidth consumption offered by federated learning make it more favorable for CAVs. On the other hand, poisoning attacks, which can break the integrity of the trained model by injecting crafted perturbations into the training samples, have become a major threat to deep learning in recent years. It has been shown that the distributed nature of federated learning makes it more vulnerable to poisoning attacks, so the strategies and attack methods available to adversaries are worth studying. In this paper, two novel optimization-based, black-box, clean-label data poisoning attack methods are proposed. Poisoning perturbations are generated using particle swarm optimization hybridized with simulated annealing and with a genetic algorithm, respectively. The attack methods are evaluated in experiments on a traffic sign recognition system for CAVs, and the results show that the prediction accuracy of the global model is significantly degraded even when only a small portion of the training data is poisoned.
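
To make the aggregation step the summary describes concrete, here is a minimal sketch of federated-averaging-style parameter exchange: clients train locally and only parameter vectors reach the server. The function names, toy dimensions, and plain-mean aggregation rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def local_update(weights, grad, lr=0.01):
    """One illustrative local SGD step on a client; the raw training
    data stays on the vehicle, only the resulting weights are shared."""
    return weights - lr * grad

def federated_average(client_weights):
    """Server-side aggregation: the global model is the element-wise
    mean of the client parameter vectors (FedAvg-style)."""
    return np.mean(client_weights, axis=0)

# Toy round with three clients and a 4-parameter model.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
updates = [local_update(global_w, rng.standard_normal(4)) for _ in range(3)]
global_w = federated_average(updates)
print(global_w)
```

Because only `updates` crosses the network, a poisoning adversary never needs access to other vehicles' data; corrupting a few local training sets is enough to skew the averaged global model.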
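The perturbation search itself can be pictured as a black-box optimization loop. The sketch below shows particle swarm optimization hybridized with a simulated-annealing acceptance rule, one of the two hybrids the abstract names; the `fitness` function, the perturbation bound, and all hyperparameters are hypothetical stand-ins, since the paper's actual objective and update rules are not given here. The genetic-algorithm hybrid would analogously replace the annealing step with crossover and mutation over the particle population.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, EPS = 16, 0.1  # perturbation dimensionality and L-inf bound (assumed)

def fitness(delta):
    """Placeholder black-box objective: stands in for querying how much the
    perturbed (but still correctly labeled) sample degrades the model."""
    return -np.sum((delta - 0.05) ** 2)  # toy surface with a known optimum

def pso_sa(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, t0=1.0, cool=0.95):
    pos = rng.uniform(-EPS, EPS, (n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    gbest_f = pbest_f.max()
    temp = t0
    for _ in range(iters):
        # Standard PSO velocity/position update.
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # Clipping keeps the perturbation small, preserving the clean label.
        pos = np.clip(pos + vel, -EPS, EPS)
        f = np.array([fitness(p) for p in pos])
        # Simulated-annealing acceptance: occasionally keep worse personal
        # bests to escape local optima, less often as the temperature cools.
        better = f > pbest_f
        anneal = rng.random(n_particles) < np.exp(np.minimum(0.0, f - pbest_f) / temp)
        accept = better | anneal
        pbest[accept] = pos[accept]
        pbest_f[accept] = f[accept]
        if pbest_f.max() > gbest_f:  # global best only ever improves
            gbest_f = pbest_f.max()
            gbest = pbest[pbest_f.argmax()].copy()
        temp *= cool
    return gbest

delta = pso_sa()  # perturbation to add to a training sample, label unchanged
```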
ISSN: 2169-3536