Poisoning Network Flow Classifiers
As machine learning (ML) classifiers increasingly oversee the automated monitoring of network traffic, studying their resilience against adversarial attacks becomes critical. This paper focuses on poisoning attacks, specifically backdoor attacks, against network traffic flow classifiers. We investigate the challenging scenario of clean-label poisoning, where the adversary's capabilities are constrained to tampering only with the training data, without the ability to arbitrarily modify the training labels or any other component of the training process. We describe a trigger crafting strategy that leverages model interpretability techniques to generate trigger patterns that are effective even at very low poisoning rates. Finally, we design novel strategies to generate stealthy triggers, including an approach based on generative Bayesian network models, with the goal of minimizing the conspicuousness of the trigger and thus making detection of an ongoing poisoning campaign more challenging. Our findings provide significant insights into the feasibility of poisoning attacks on network traffic classifiers used in multiple scenarios, including detecting malicious communication and application classification.
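To make the described attack concrete, the following is a minimal, hypothetical Python sketch of the general recipe the abstract outlines: an interpretability-guided choice of trigger features followed by clean-label injection into tabular flow features. It is not the authors' implementation; the function names, the use of tree feature importances as the interpretability signal, the trigger values, and the 0.5% poisoning rate are illustrative assumptions.

```python
# Hypothetical sketch, not the paper's code: clean-label backdoor poisoning of a
# tabular network-flow classifier. Per-feature importances stand in for the
# model-interpretability signal used to pick which flow features form the trigger.
import numpy as np

def craft_trigger(importances, X, k=4):
    """Select the k most influential feature indices and fix them to rare (max observed) values."""
    idx = np.argsort(importances)[-k:]
    values = X[:, idx].max(axis=0)  # assumed trigger values; any uncommon pattern could serve
    return idx, values

def poison_clean_label(X, y, idx, values, target_class=0, rate=0.005, seed=0):
    """Stamp the trigger onto a small fraction of correctly labeled target-class rows.

    Labels are never changed, so the attack stays clean-label; the poisoned rows
    teach the model to associate the trigger pattern with the target class.
    """
    Xp = X.copy()
    candidates = np.flatnonzero(y == target_class)
    n_poison = min(max(1, int(rate * len(y))), len(candidates))
    chosen = np.random.default_rng(seed).choice(candidates, size=n_poison, replace=False)
    Xp[np.ix_(chosen, idx)] = values
    return Xp, y

# Usage (illustrative): fit any surrogate that exposes per-feature importances
# (e.g. a gradient-boosted tree), pass model.feature_importances_ to craft_trigger,
# retrain the victim on the poisoned data, then add the trigger to malicious test
# flows and check whether they are misclassified as the target (benign) class.
```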
Main Authors: | Severi, Giorgio, Boboila, Simona, Oprea, Alina, Holodnak, John, Kratkiewicz, Kendra, Matterer, Jason |
---|---|
Other Authors: | Lincoln Laboratory |
Format: | Article |
Language: | English |
Published: | ACM, Annual Computer Security Applications Conference, 2024 |
Online Access: | https://hdl.handle.net/1721.1/153298 |
author | Severi, Giorgio Boboila, Simona Oprea, Alina Holodnak, John Kratkiewicz, Kendra Matterer, Jason |
author2 | Lincoln Laboratory |
collection | MIT |
description | As machine learning (ML) classifiers increasingly oversee the automated monitoring of network traffic, studying their resilience against adversarial attacks becomes critical. This paper focuses on poisoning attacks, specifically backdoor attacks, against network traffic flow classifiers. We investigate the challenging scenario of clean-label poisoning where the adversary’s capabilities are constrained to tampering only with the training data — without the ability to arbitrarily modify the training labels or any other component of the training process. We describe a trigger crafting strategy that leverages model interpretability techniques to generate trigger patterns that are effective even at very low poisoning rates. Finally, we design novel strategies to generate stealthy triggers, including an approach based on generative Bayesian network models, with the goal of minimizing the conspicuousness of the trigger, and thus making detection of an ongoing poisoning campaign more challenging. Our findings provide significant insights into the feasibility of poisoning attacks on network traffic classifiers used in multiple scenarios, including detecting malicious communication and application classification. |
format | Article |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2024 |
publisher | Association for Computing Machinery (ACM), Annual Computer Security Applications Conference |
type | Conference Paper (http://purl.org/eprint/type/ConferencePaper) |
date issued | 2023-12-04 |
isbn | 979-8-4007-0886-2 |
citation | Severi, Giorgio, Boboila, Simona, Oprea, Alina, Holodnak, John, Kratkiewicz, Kendra et al. 2023. "Poisoning Network Flow Classifiers." |
doi | https://doi.org/10.1145/3627106.3627123 |
rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. |
rights holder | The author(s) |
file format | application/pdf |
title | Poisoning Network Flow Classifiers |
url | https://hdl.handle.net/1721.1/153298 |