Tiny adversarial multi-objective one-shot neural architecture search
Main Authors: | Guoyang Xie, Jinbao Wang, Guo Yu, Jiayi Lyu, Feng Zheng, Yaochu Jin |
---|---|
Format: | Article |
Language: | English |
Published: | Springer, 2023-07-01 |
Series: | Complex & Intelligent Systems |
Subjects: | Tiny neural network architecture search; Adversarial attack; One-shot learning; Multi-objective optimization |
Online Access: | https://doi.org/10.1007/s40747-023-01139-8 |
author | Guoyang Xie; Jinbao Wang; Guo Yu; Jiayi Lyu; Feng Zheng; Yaochu Jin |
collection | DOAJ |
description | Abstract Tiny neural networks (TNNs) are widely deployed on mobile devices, yet they are vulnerable to adversarial attacks, and more advanced research on TNN robustness is in high demand. This work focuses on improving the robustness of TNNs without sacrificing the model's accuracy. To find networks with the optimal trade-off among adversarial accuracy, clean accuracy, and model size, we present TAM-NAS, a tiny adversarial multi-objective one-shot network architecture search method. First, we build a novel search space composed of new tiny blocks and channels to balance model size against adversarial performance. Then, given that the supernet significantly impacts the performance of its subnets, we demonstrate how the supernet facilitates obtaining the optimal subnet under white-box adversarial attacks. Concretely, we investigate a new adversarial training paradigm by evaluating adversarial transferability, the width of the supernet, and the difference between training subnets from scratch and fine-tuning them. Finally, we undertake a statistical analysis of the layer-wise combinations of specific blocks and channels on the first non-dominated front, which can serve as a design guideline for TNNs. |
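The abstract summarizes the method at a high level. Below is a minimal, self-contained sketch of the ingredients it names: a weight-sharing supernet with per-layer block choices, white-box PGD evaluation, and a first non-dominated front over clean accuracy, adversarial accuracy, and model size. It is written against PyTorch; `SuperNet`, `ChoiceLayer`, the block types and widths, and the random sampling loop are illustrative assumptions, not the paper's actual search space or training procedure.

```python
# Minimal sketch (PyTorch) of the ideas named in the abstract. SuperNet,
# ChoiceLayer, and the sampling loop are invented placeholders, NOT the
# paper's search space; only the PGD attack follows the standard recipe.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChoiceLayer(nn.Module):
    """One supernet layer holding candidate blocks; a sampled architecture
    activates exactly one of them (single-path one-shot weight sharing)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(in_ch, out_ch, 5, padding=2), nn.ReLU()),
            nn.Sequential(  # depthwise-separable conv: a typical "tiny" block
                nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),
                nn.Conv2d(in_ch, out_ch, 1), nn.ReLU()),
        ])

    def forward(self, x, choice):
        return self.ops[choice](x)

class SuperNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.layers = nn.ModuleList([ChoiceLayer(3, 16), ChoiceLayer(16, 32)])
        self.head = nn.Linear(32, num_classes)

    def forward(self, x, arch):
        for layer, choice in zip(self.layers, arch):
            x = layer(x, choice)
        return self.head(F.adaptive_avg_pool2d(x, 1).flatten(1))

    def sample_arch(self):
        return [random.randrange(len(layer.ops)) for layer in self.layers]

    def arch_size(self, arch):
        """Parameter count of the subnet selected by `arch` alone."""
        n = sum(p.numel() for p in self.head.parameters())
        for layer, c in zip(self.layers, arch):
            n += sum(p.numel() for p in layer.ops[c].parameters())
        return n

def pgd(model, arch, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard white-box PGD attack on a fixed subnet."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv, arch), y)
        (g,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv

def dominates(a, b):
    """Pareto dominance over objective tuples where larger is better."""
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

# Toy evaluation loop: random tensors stand in for a real test set.
net = SuperNet()
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
cands = []
for _ in range(20):
    arch = net.sample_arch()
    with torch.no_grad():
        clean = (net(x, arch).argmax(1) == y).float().mean().item()
    x_adv = pgd(net, arch, x, y)
    with torch.no_grad():
        adv = (net(x_adv, arch).argmax(1) == y).float().mean().item()
    # Maximize clean and adversarial accuracy, minimize model size.
    cands.append((arch, (clean, adv, -net.arch_size(arch))))
front = [arch for arch, s in cands
         if not any(dominates(s2, s) for _, s2 in cands)]
print("architectures on the first non-dominated front:", front)
```

In the paper's setting, the supernet itself would first be trained adversarially before subnets are sampled and compared; the sketch above only illustrates how candidate subnets could be scored on the three objectives and filtered down to a first non-dominated front.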
format | Article |
id | doaj.art-2595400a168e4b11899ac7f32ef98532 |
institution | Directory Open Access Journal |
issn | 2199-4536; 2198-6053 |
language | English |
publishDate | 2023-07-01 |
publisher | Springer |
record_format | Article |
series | Complex & Intelligent Systems |
spelling | Guoyang Xie (Department of Computer Science and Engineering, Southern University of Science and Technology); Jinbao Wang (Department of Computer Science and Engineering, Southern University of Science and Technology); Guo Yu (Institute of Intelligent Manufacturing, Nanjing Tech University); Jiayi Lyu (School of Engineering Science, University of Chinese Academy of Sciences); Feng Zheng (Department of Computer Science, Southern University of Science and Technology); Yaochu Jin (Faculty of Technology, Bielefeld University). Tiny adversarial multi-objective one-shot neural architecture search. Complex & Intelligent Systems 9(6):6117–6138, 2023-07-01. Springer. ISSN 2199-4536; 2198-6053. https://doi.org/10.1007/s40747-023-01139-8 |
title | Tiny adversarial multi-objective one-shot neural architecture search |
topic | Tiny neural network architecture search; Adversarial attack; One-shot learning; Multi-objective optimization |
url | https://doi.org/10.1007/s40747-023-01139-8 |