RoHNAS: A Neural Architecture Search Framework With Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks
Neural Architecture Search (NAS) algorithms aim at finding efficient Deep Neural Network (DNN) architectures for a given application under given system constraints. DNNs are computationally complex as well as vulnerable to adversarial attacks. To address multiple design objectives, we propose *RoHNAS*, a novel NAS framework that jointly optimizes for the adversarial robustness and hardware efficiency of DNNs executed on specialized hardware accelerators. Besides traditional convolutional DNNs, *RoHNAS* additionally accounts for complex types of DNNs such as Capsule Networks. To reduce the exploration time, *RoHNAS* analyzes and selects appropriate values of adversarial perturbation for each dataset to employ in the NAS flow. Extensive evaluations on multi-Graphics Processing Unit (GPU) High Performance Computing (HPC) nodes provide a set of Pareto-optimal solutions, exposing the tradeoff between the above design objectives. For example, a Pareto-optimal DNN for the CIFAR-10 dataset exhibits 86.07% accuracy, while having an energy of 38.63 mJ, a memory footprint of 11.85 MiB, and a latency of 4.47 ms.
Main Authors: | Alberto Marchisio, Vojtech Mrazek, Andrea Massa, Beatrice Bussolino, Maurizio Martina, Muhammad Shafique |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE 2022-01-01 |
Series: | IEEE Access |
Subjects: | Adversarial robustness; energy efficiency; latency; memory; hardware-aware neural architecture search; evolutionary algorithm |
Online Access: | https://ieeexplore.ieee.org/document/9917535/ |
_version_ | 1797990677731082240 |
---|---|
author | Alberto Marchisio; Vojtech Mrazek; Andrea Massa; Beatrice Bussolino; Maurizio Martina; Muhammad Shafique
author_facet | Alberto Marchisio; Vojtech Mrazek; Andrea Massa; Beatrice Bussolino; Maurizio Martina; Muhammad Shafique
author_sort | Alberto Marchisio |
collection | DOAJ |
description | Neural Architecture Search (NAS) algorithms aim at finding efficient Deep Neural Network (DNN) architectures for a given application under given system constraints. DNNs are computationally complex as well as vulnerable to adversarial attacks. To address multiple design objectives, we propose *RoHNAS*, a novel NAS framework that jointly optimizes for the adversarial robustness and hardware efficiency of DNNs executed on specialized hardware accelerators. Besides traditional convolutional DNNs, *RoHNAS* additionally accounts for complex types of DNNs such as Capsule Networks. To reduce the exploration time, *RoHNAS* analyzes and selects appropriate values of adversarial perturbation for each dataset to employ in the NAS flow. Extensive evaluations on multi-Graphics Processing Unit (GPU) High Performance Computing (HPC) nodes provide a set of Pareto-optimal solutions, exposing the tradeoff between the above design objectives. For example, a Pareto-optimal DNN for the CIFAR-10 dataset exhibits 86.07% accuracy, while having an energy of 38.63 mJ, a memory footprint of 11.85 MiB, and a latency of 4.47 ms.
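The core idea in the description is multi-objective selection: among the evaluated architectures, only the Pareto-optimal ones (those not beaten on every objective by some other candidate) are kept. As an illustration of that selection step only, and not the paper's actual RoHNAS implementation, the following minimal Python sketch filters a set of hypothetical candidates down to their Pareto front over the four objectives quoted in the abstract: accuracy under adversarial perturbation (maximize) and energy, memory, and latency (minimize). All candidate values except the quoted CIFAR-10 point are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    name: str
    accuracy: float    # % accuracy under adversarial perturbation (higher is better)
    energy_mj: float   # energy per inference in mJ (lower is better)
    memory_mib: float  # memory footprint in MiB (lower is better)
    latency_ms: float  # inference latency in ms (lower is better)

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is at least as good as `b` on every objective
    and strictly better on at least one."""
    at_least_as_good = (a.accuracy >= b.accuracy
                        and a.energy_mj <= b.energy_mj
                        and a.memory_mib <= b.memory_mib
                        and a.latency_ms <= b.latency_ms)
    strictly_better = (a.accuracy > b.accuracy
                       or a.energy_mj < b.energy_mj
                       or a.memory_mib < b.memory_mib
                       or a.latency_ms < b.latency_ms)
    return at_least_as_good and strictly_better

def pareto_front(candidates: list[Candidate]) -> list[Candidate]:
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical search results; "dnn-a" uses the CIFAR-10 numbers from the abstract for scale.
candidates = [
    Candidate("dnn-a", 86.07, 38.63, 11.85, 4.47),  # the Pareto-optimal point quoted above
    Candidate("dnn-b", 84.00, 55.00, 20.00, 6.00),  # worse on all four objectives: dominated
    Candidate("dnn-c", 88.50, 70.00, 25.00, 9.00),  # more accurate but costlier: stays on the front
]

for c in pareto_front(candidates):
    print(c.name, c.accuracy, c.energy_mj, c.memory_mib, c.latency_ms)
```

In an evolutionary NAS loop, which the record's "evolutionary algorithm" keyword suggests, a selection step of this kind would typically be applied repeatedly: surviving front members are mutated or recombined into new candidate architectures, evaluated, and re-filtered.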
first_indexed | 2024-04-11T08:40:22Z |
format | Article |
id | doaj.art-d4901f91d35a4cef985e24ffbf45549c |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-04-11T08:40:22Z |
publishDate | 2022-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-d4901f91d35a4cef985e24ffbf45549c; 2022-12-22T04:34:14Z; eng; IEEE; IEEE Access; ISSN 2169-3536; 2022-01-01; vol. 10, pp. 109043-109055; DOI 10.1109/ACCESS.2022.3214312; document 9917535; RoHNAS: A Neural Architecture Search Framework With Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks; Alberto Marchisio (https://orcid.org/0000-0002-0689-4776), Institute of Computer Engineering, Technische Universität Wien (TU Wien), Embedded Computing Systems Group, Vienna, Austria; Vojtech Mrazek (https://orcid.org/0000-0002-9399-9313), Evolvable Hardware Research Group, Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic; Andrea Massa, Department of Electronics and Telecommunications, VLSI Laboratory, Politecnico di Torino, Turin, Italy; Beatrice Bussolino (https://orcid.org/0000-0003-2608-820X), Department of Electronics and Telecommunications, VLSI Laboratory, Politecnico di Torino, Turin, Italy; Maurizio Martina (https://orcid.org/0000-0002-3069-0319), Department of Electronics and Telecommunications, VLSI Laboratory, Politecnico di Torino, Turin, Italy; Muhammad Shafique (https://orcid.org/0000-0002-2607-8135), EBrain Laboratory, Division of Engineering, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; Neural Architecture Search (NAS) algorithms aim at finding efficient Deep Neural Network (DNN) architectures for a given application under given system constraints. DNNs are computationally complex as well as vulnerable to adversarial attacks. To address multiple design objectives, we propose *RoHNAS*, a novel NAS framework that jointly optimizes for the adversarial robustness and hardware efficiency of DNNs executed on specialized hardware accelerators. Besides traditional convolutional DNNs, *RoHNAS* additionally accounts for complex types of DNNs such as Capsule Networks. To reduce the exploration time, *RoHNAS* analyzes and selects appropriate values of adversarial perturbation for each dataset to employ in the NAS flow. Extensive evaluations on multi-Graphics Processing Unit (GPU) High Performance Computing (HPC) nodes provide a set of Pareto-optimal solutions, exposing the tradeoff between the above design objectives. For example, a Pareto-optimal DNN for the CIFAR-10 dataset exhibits 86.07% accuracy, while having an energy of 38.63 mJ, a memory footprint of 11.85 MiB, and a latency of 4.47 ms. https://ieeexplore.ieee.org/document/9917535/; Adversarial robustness; energy efficiency; latency; memory; hardware-aware neural architecture search; evolutionary algorithm
spellingShingle | Alberto Marchisio; Vojtech Mrazek; Andrea Massa; Beatrice Bussolino; Maurizio Martina; Muhammad Shafique; RoHNAS: A Neural Architecture Search Framework With Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks; IEEE Access; Adversarial robustness; energy efficiency; latency; memory; hardware-aware neural architecture search; evolutionary algorithm
title | RoHNAS: A Neural Architecture Search Framework With Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks |
title_full | RoHNAS: A Neural Architecture Search Framework With Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks |
title_fullStr | RoHNAS: A Neural Architecture Search Framework With Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks |
title_full_unstemmed | RoHNAS: A Neural Architecture Search Framework With Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks |
title_short | RoHNAS: A Neural Architecture Search Framework With Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks |
title_sort | rohnas a neural architecture search framework with conjoint optimization for adversarial robustness and hardware efficiency of convolutional and capsule networks |
topic | Adversarial robustness; energy efficiency; latency; memory; hardware-aware neural architecture search; evolutionary algorithm
url | https://ieeexplore.ieee.org/document/9917535/ |
work_keys_str_mv | AT albertomarchisio rohnasaneuralarchitecturesearchframeworkwithconjointoptimizationforadversarialrobustnessandhardwareefficiencyofconvolutionalandcapsulenetworks AT vojtechmrazek rohnasaneuralarchitecturesearchframeworkwithconjointoptimizationforadversarialrobustnessandhardwareefficiencyofconvolutionalandcapsulenetworks AT andreamassa rohnasaneuralarchitecturesearchframeworkwithconjointoptimizationforadversarialrobustnessandhardwareefficiencyofconvolutionalandcapsulenetworks AT beatricebussolino rohnasaneuralarchitecturesearchframeworkwithconjointoptimizationforadversarialrobustnessandhardwareefficiencyofconvolutionalandcapsulenetworks AT mauriziomartina rohnasaneuralarchitecturesearchframeworkwithconjointoptimizationforadversarialrobustnessandhardwareefficiencyofconvolutionalandcapsulenetworks AT muhammadshafique rohnasaneuralarchitecturesearchframeworkwithconjointoptimizationforadversarialrobustnessandhardwareefficiencyofconvolutionalandcapsulenetworks |