DTA: distribution transform-based attack for query-limited scenario
In generating adversarial examples, the conventional black-box attack methods rely on sufficient feedback from the to-be-attacked models by repeatedly querying until the attack is successful, which usually results in thousands of trials during an attack. This may be unacceptable in real applications...
Main Authors: | Liu, Renyang; Zhou, Wei; Jin, Xin; Gao, Song; Wang, Yuanyu; Wang, Ruxin |
---|---|
Other Authors: | School of Computer Science and Engineering |
Format: | Journal Article |
Language: | English |
Published: | 2024 |
Subjects: | Computer and Information Science; Distribution transform-based attack; Query-limited adversarial attack |
Online Access: | https://hdl.handle.net/10356/179697 |
_version_ | 1826124846281523200 |
---|---|
author | Liu, Renyang; Zhou, Wei; Jin, Xin; Gao, Song; Wang, Yuanyu; Wang, Ruxin |
author2 | School of Computer Science and Engineering |
author_facet | School of Computer Science and Engineering; Liu, Renyang; Zhou, Wei; Jin, Xin; Gao, Song; Wang, Yuanyu; Wang, Ruxin |
author_sort | Liu, Renyang |
collection | NTU |
description | In generating adversarial examples, conventional black-box attack methods rely on abundant feedback from the attacked model, querying it repeatedly until the attack succeeds, which usually requires thousands of queries per attack. This may be unacceptable in real applications, since Machine Learning as a Service (MLaaS) platforms usually return only the final result (i.e., the hard label) to the client, and a system equipped with defense mechanisms can easily detect such streams of malicious queries. A more practical alternative is a hard-label attack in which the attacker is permitted only a limited number of queries. To implement this idea, in this paper we bypass the dependency on the attacked model and exploit the characteristics of the distribution of adversarial examples to reformulate the attack problem as a distribution transform, proposing the distribution transform-based attack (DTA). DTA builds a statistical mapping from a benign example to its adversarial counterparts by modeling the conditional likelihood under the hard-label black-box setting. In this way, it is no longer necessary to query the target model frequently. A well-trained DTA model can directly and efficiently generate a batch of adversarial examples for a given input, which can then be used to attack unseen models by relying on transferability. Furthermore, we find, surprisingly, that a well-trained DTA model is not sensitive to the semantic space of its training dataset, meaning that it yields acceptable attack performance on other datasets. Extensive experiments validate the effectiveness of the proposed idea and the superiority of DTA over the state of the art. |
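The abstract above describes DTA as a learned conditional mapping from a benign input to a distribution of adversarial counterparts, so that a batch of candidate adversarial examples can be produced in one forward pass without querying the target model. The snippet below is a minimal, hypothetical PyTorch sketch of that generation step only: the module name `CondAdvGenerator`, the noise code `z`, and the L∞ budget `eps` are illustrative assumptions, not the authors' released code, and the paper's actual training procedure (fitting the conditional likelihood of adversarial examples) is not shown.

```python
# Hypothetical sketch of query-free adversarial-example generation.
# A conditional generator maps (benign image, Gaussian noise code) to a
# bounded adversarial candidate; the target model is never queried here.
import torch
import torch.nn as nn


class CondAdvGenerator(nn.Module):
    """Maps (benign image, noise code) -> bounded adversarial example."""

    def __init__(self, channels: int = 3, noise_dim: int = 64, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.noise_dim = noise_dim
        # Project the noise code to a spatial map so it can be concatenated
        # with the image and decoded into a perturbation.
        self.noise_proj = nn.Linear(noise_dim, 32 * 32)
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # x: (B, C, 32, 32) benign images in [0, 1]; z: (B, noise_dim).
        b = x.size(0)
        z_map = self.noise_proj(z).view(b, 1, 32, 32)
        # Bound the perturbation with tanh and scale it to the eps budget.
        delta = torch.tanh(self.net(torch.cat([x, z_map], dim=1))) * self.eps
        return (x + delta).clamp(0.0, 1.0)


if __name__ == "__main__":
    gen = CondAdvGenerator()
    benign = torch.rand(1, 3, 32, 32)            # one benign input
    num_candidates = 16                          # size of the candidate batch
    x_rep = benign.repeat(num_candidates, 1, 1, 1)
    z = torch.randn(num_candidates, gen.noise_dim)
    with torch.no_grad():
        candidates = gen(x_rep, z)               # no target-model queries here
    print(candidates.shape)                      # torch.Size([16, 3, 32, 32])
```

The point of the sketch is the query budget: every candidate comes from the generator alone, and the target model would only be consulted afterwards, if at all, to check which candidate succeeds.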
first_indexed | 2024-10-01T06:27:02Z |
format | Journal Article |
id | ntu-10356/179697 |
institution | Nanyang Technological University |
language | English |
last_indexed | 2024-10-01T06:27:02Z |
publishDate | 2024 |
record_format | dspace |
spelling | ntu-10356/1796972024-08-23T15:36:01Z DTA: distribution transform-based attack for query-limited scenario Liu, Renyang Zhou, Wei Jin, Xin Gao, Song Wang, Yuanyu Wang, Ruxin School of Computer Science and Engineering Computer and Information Science Distribution transform-based attack Query-limited adversarial attack In generating adversarial examples, the conventional black-box attack methods rely on sufficient feedback from the to-be-attacked models by repeatedly querying until the attack is successful, which usually results in thousands of trials during an attack. This may be unacceptable in real applications since Machine Learning as a Service Platform (MLaaS) usually only returns the final result (i.e., hard-label) to the client and a system equipped with certain defense mechanisms could easily detect malicious queries. By contrast, a feasible way is a hard-label attack that simulates an attacked action being permitted to conduct a limited number of queries. To implement this idea, in this paper, we bypass the dependency on the to-be-attacked model and benefit from the characteristics of the distributions of adversarial examples to reformulate the attack problem in a distribution transform manner and propose a distribution transform-based attack (DTA). DTA builds a statistical mapping from the benign example to its adversarial counterparts by tackling the conditional likelihood under the hard-label black-box settings. In this way, it is no longer necessary to query the target model frequently. A well-trained DTA model can directly and efficiently generate a batch of adversarial examples for a certain input, which can be used to attack un-seen models based on the assumed transferability. Furthermore, we surprisingly find that the well-trained DTA model is not sensitive to the semantic spaces of the training dataset, meaning that the model yields acceptable attack performance on other datasets. Extensive experiments validate the effectiveness of the proposed idea and the superiority of DTA over the state-of-the-art. Published version This work is supported in part by the National Natural Science Foundation of China under Grant 62162067, 62101480 and 62362068, Research and Application of Object Detection based on Artificial Intelligence, in part by the Yunnan Province expert workstations under Grant 202305AF150078 and the Scientific Research Fund Project of Yunnan Provincial Education Department under 2023Y0249. 2024-08-19T01:36:31Z 2024-08-19T01:36:31Z 2024 Journal Article Liu, R., Zhou, W., Jin, X., Gao, S., Wang, Y. & Wang, R. (2024). DTA: distribution transform-based attack for query-limited scenario. Cybersecurity, 7(1). https://dx.doi.org/10.1186/s42400-023-00197-2 2523-3246 https://hdl.handle.net/10356/179697 10.1186/s42400-023-00197-2 2-s2.0-85189137456 1 7 en Cybersecurity © 2024 The Author(s). Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. application/pdf |
spellingShingle | Computer and Information Science; Distribution transform-based attack; Query-limited adversarial attack; Liu, Renyang; Zhou, Wei; Jin, Xin; Gao, Song; Wang, Yuanyu; Wang, Ruxin; DTA: distribution transform-based attack for query-limited scenario |
title | DTA: distribution transform-based attack for query-limited scenario |
title_full | DTA: distribution transform-based attack for query-limited scenario |
title_fullStr | DTA: distribution transform-based attack for query-limited scenario |
title_full_unstemmed | DTA: distribution transform-based attack for query-limited scenario |
title_short | DTA: distribution transform-based attack for query-limited scenario |
title_sort | dta distribution transform based attack for query limited scenario |
topic | Computer and Information Science; Distribution transform-based attack; Query-limited adversarial attack |
url | https://hdl.handle.net/10356/179697 |
work_keys_str_mv | AT liurenyang dtadistributiontransformbasedattackforquerylimitedscenario AT zhouwei dtadistributiontransformbasedattackforquerylimitedscenario AT jinxin dtadistributiontransformbasedattackforquerylimitedscenario AT gaosong dtadistributiontransformbasedattackforquerylimitedscenario AT wangyuanyu dtadistributiontransformbasedattackforquerylimitedscenario AT wangruxin dtadistributiontransformbasedattackforquerylimitedscenario |