Data Poisoning Attacks With Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles
As a state-of-the-art distributed learning approach, federated learning has gained much popularity in connected and autonomous vehicles (CAVs). In federated learning, models are trained locally, and only model parameters, instead of raw data, are exchanged to aggregate a global model. Compared with traditional learning approaches, the enhanced privacy protection and reduced network bandwidth consumption offered by federated learning make it more favorable for CAVs...
Main Authors: | Chi Cui, Haiping Du, Zhijuan Jia, Xiaofei Zhang, Yuchu He, Yanyan Yang |
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Connected and autonomous vehicles; data poisoning attacks; federated learning; hybrid particle swarm optimization |
Online Access: | https://ieeexplore.ieee.org/document/10332177/ |
_version_ | 1797392268566462464 |
author | Chi Cui; Haiping Du; Zhijuan Jia; Xiaofei Zhang; Yuchu He; Yanyan Yang |
author_facet | Chi Cui; Haiping Du; Zhijuan Jia; Xiaofei Zhang; Yuchu He; Yanyan Yang |
author_sort | Chi Cui |
collection | DOAJ |
description | As a state-of-the-art distributed learning approach, federated learning has gained much popularity in connected and autonomous vehicles (CAVs). In federated learning, models are trained locally, and only model parameters, instead of raw data, are exchanged to aggregate a global model. Compared with traditional learning approaches, the enhanced privacy protection and reduced network bandwidth consumption offered by federated learning make it more favorable for CAVs. On the other hand, poisoning attacks, which can break the integrity of the trained model by injecting crafted perturbations into the training samples, have become a major threat to deep learning in recent years. It has been shown that the distributed nature of federated learning makes it more vulnerable to poisoning attacks. In view of this situation, the strategies and attack methods of adversaries are worth studying. In this paper, two novel optimization-based black-box and clean-label data poisoning attack methods are proposed. Poisoning perturbations are generated using particle swarm optimization hybridized with simulated annealing and with a genetic algorithm, respectively. The attack methods are evaluated in experiments on a traffic sign recognition system for CAVs, and the results show that the prediction accuracy of the global model is significantly degraded even when only a small portion of the data is poisoned using the proposed methods. |
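The description above notes that in federated learning each client trains locally and the server aggregates only parameter vectors. As a minimal illustrative sketch of that aggregation pattern (a toy federated-averaging loop on synthetic linear-regression clients, not the system evaluated in the article):

```python
# Minimal federated-averaging sketch: clients take local gradient steps on
# private data and exchange only parameter vectors, which the server
# aggregates into a global model. Illustrative only; not the paper's code.
import numpy as np

def local_update(global_params, data, lr=0.1):
    """One local gradient step of linear regression on a client's private data."""
    X, y = data
    grad = X.T @ (X @ global_params - y) / len(y)
    return global_params - lr * grad

def fed_avg(client_params, client_sizes):
    """Server-side aggregation: dataset-size-weighted mean of client parameters."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # ground-truth model the clients share
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))      # each client holds 50 private samples

global_w = np.zeros(2)
for _ in range(100):                     # 100 communication rounds
    updates = [local_update(global_w, d) for d in clients]
    global_w = fed_avg(updates, [len(d[1]) for d in clients])
```

With clean data the global model converges to the shared ground truth; a data-poisoning adversary corrupts some clients' `(X, y)` pairs so the aggregated model degrades, which is the threat the article studies.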
first_indexed | 2024-03-08T23:44:48Z |
format | Article |
id | doaj.art-85b521baf78c48348394ff52bc1bd10d |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-03-08T23:44:48Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-85b521baf78c48348394ff52bc1bd10d | 2023-12-14T00:02:00Z | eng | IEEE | IEEE Access | ISSN 2169-3536 | 2023-01-01 | Vol. 11, pp. 136361-136369 | DOI 10.1109/ACCESS.2023.3337638 | 10332177 | Data Poisoning Attacks With Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles | Chi Cui (https://orcid.org/0000-0003-4948-1338), Department of Computer Science and Technology, Zhengzhou Normal University, Zhengzhou, China; Haiping Du (https://orcid.org/0000-0002-3439-3821), School of Electrical, Computer and Telecommunications Engineering, University of Wollongong, Wollongong, NSW, Australia; Zhijuan Jia (https://orcid.org/0000-0003-4730-2906), Department of Computer Science and Technology, Zhengzhou Normal University, Zhengzhou, China; Xiaofei Zhang, School of Electrical, Computer and Telecommunications Engineering, University of Wollongong, Wollongong, NSW, Australia; Yuchu He (https://orcid.org/0000-0003-3131-7601), Department of Computer Science and Technology, Zhengzhou Normal University, Zhengzhou, China; Yanyan Yang (https://orcid.org/0000-0002-6053-3410), Department of Computer Science and Technology, Zhengzhou Normal University, Zhengzhou, China | As a state-of-the-art distributed learning approach, federated learning has gained much popularity in connected and autonomous vehicles (CAVs). In federated learning, models are trained locally, and only model parameters, instead of raw data, are exchanged to aggregate a global model. Compared with traditional learning approaches, the enhanced privacy protection and reduced network bandwidth consumption offered by federated learning make it more favorable for CAVs. On the other hand, poisoning attacks, which can break the integrity of the trained model by injecting crafted perturbations into the training samples, have become a major threat to deep learning in recent years. It has been shown that the distributed nature of federated learning makes it more vulnerable to poisoning attacks. In view of this situation, the strategies and attack methods of adversaries are worth studying. In this paper, two novel optimization-based black-box and clean-label data poisoning attack methods are proposed. Poisoning perturbations are generated using particle swarm optimization hybridized with simulated annealing and with a genetic algorithm, respectively. The attack methods are evaluated in experiments on a traffic sign recognition system for CAVs, and the results show that the prediction accuracy of the global model is significantly degraded even when only a small portion of the data is poisoned using the proposed methods. | https://ieeexplore.ieee.org/document/10332177/ | Connected and autonomous vehicles; data poisoning attacks; federated learning; hybrid particle swarm optimization |
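The abstract describes generating poisoning perturbations with particle swarm optimization hybridized with simulated annealing. A generic sketch of such a hybrid (a standard PSO loop whose personal-best update uses a simulated-annealing acceptance rule, shown here minimizing a simple black-box objective; the article's actual attack objective, bounds, and operators are not reproduced):

```python
# Generic PSO + simulated-annealing hybrid: standard velocity/position
# updates, but a worse candidate may still replace a particle's personal
# best with probability exp(-delta/temperature), encouraging exploration
# early and greedy refinement as the temperature cools. Assumed structure
# for illustration; not the paper's exact algorithm.
import math
import random

def pso_sa(loss, dim, n_particles=20, iters=200, t0=1.0, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for t in range(iters):
        temp = t0 * (1 - t / iters) + 1e-9   # linear cooling schedule
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            val = loss(pos[i])
            delta = val - pbest_val[i]
            # SA acceptance: always keep improvements; sometimes accept a
            # worse personal best while the temperature is high.
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                 # global best is greedy
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage: minimize the sphere function in 3 dimensions.
best, best_val = pso_sa(lambda x: sum(v * v for v in x), dim=3)
```

In a clean-label black-box attack setting, the hypothetical `loss` would instead score a candidate perturbation by querying the victim model, and positions would be clipped to keep the perturbed samples visually plausible.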
spellingShingle | Chi Cui; Haiping Du; Zhijuan Jia; Xiaofei Zhang; Yuchu He; Yanyan Yang; Data Poisoning Attacks With Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles; IEEE Access; Connected and autonomous vehicles; data poisoning attacks; federated learning; hybrid particle swarm optimization |
title | Data Poisoning Attacks With Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles |
title_full | Data Poisoning Attacks With Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles |
title_fullStr | Data Poisoning Attacks With Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles |
title_full_unstemmed | Data Poisoning Attacks With Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles |
title_short | Data Poisoning Attacks With Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles |
title_sort | data poisoning attacks with hybrid particle swarm optimization algorithms against federated learning in connected and autonomous vehicles |
topic | Connected and autonomous vehicles; data poisoning attacks; federated learning; hybrid particle swarm optimization |
url | https://ieeexplore.ieee.org/document/10332177/ |
work_keys_str_mv | AT chicui datapoisoningattackswithhybridparticleswarmoptimizationalgorithmsagainstfederatedlearninginconnectedandautonomousvehicles AT haipingdu datapoisoningattackswithhybridparticleswarmoptimizationalgorithmsagainstfederatedlearninginconnectedandautonomousvehicles AT zhijuanjia datapoisoningattackswithhybridparticleswarmoptimizationalgorithmsagainstfederatedlearninginconnectedandautonomousvehicles AT xiaofeizhang datapoisoningattackswithhybridparticleswarmoptimizationalgorithmsagainstfederatedlearninginconnectedandautonomousvehicles AT yuchuhe datapoisoningattackswithhybridparticleswarmoptimizationalgorithmsagainstfederatedlearninginconnectedandautonomousvehicles AT yanyanyang datapoisoningattackswithhybridparticleswarmoptimizationalgorithmsagainstfederatedlearninginconnectedandautonomousvehicles |