Evaluation of Model Quantization Method on Vitis-AI for Mitigating Adversarial Examples

Bibliographic Details
Main Authors: Yuta Fukuda, Kota Yoshida, Takeshi Fujino
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10216964/
Description
Summary: Adversarial examples (AEs) are a typical model evasion attack and a security threat to deep neural networks (DNNs). One countermeasure is adversarial training (AT), which trains DNNs on a dataset containing AEs to achieve robustness against them. However, the robustness obtained by AT decreases greatly when the model's parameters are quantized from 32-bit floats to 8-bit integers so that the DNN can run on edge devices with restricted hardware resources. Preliminary experiments in this study show that robustness is reduced by the fine-tuning process, in which the quantized model is trained with clean samples to reduce quantization errors. To address this problem, we propose quantization-aware adversarial training (QAAT), which optimizes DNNs by conducting AT within the quantization flow. In this study, we constructed a QAAT model using Vitis-AI, provided by Xilinx. We implemented the QAAT model on the ZCU104 evaluation board, equipped with a Zynq UltraScale+ MPSoC, and demonstrated its robustness against AEs.
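The abstract refers to quantizing model parameters from 32-bit floats to 8-bit integers for edge deployment. As a minimal sketch of the idea, the snippet below implements symmetric per-tensor int8 quantization in NumPy; this is an assumed generic scheme for illustration, not Vitis-AI's actual quantizer, and it shows the quantization error that the fine-tuning step mentioned in the abstract is meant to reduce.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float32 weights to int8.

    Hypothetical generic scheme for illustration only; the Vitis-AI
    quantizer's actual calibration and rounding may differ.
    """
    scale = np.max(np.abs(w)) / 127.0          # map the largest |weight| to 127
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 codes."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight tensor and measure the error.
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.max(np.abs(w - w_hat))            # bounded by scale / 2 (rounding)
```

The rounding step bounds the per-weight error by half a quantization step (scale / 2); accumulated across layers, this perturbation is what degrades the robustness obtained by AT, motivating training inside the quantization flow as QAAT does.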
ISSN: 2169-3536