Improving Adversarial Robustness via Distillation-Based Purification
Despite the impressive performance of deep neural networks on many vision tasks, they are known to be vulnerable to noise intentionally added to input images. To combat these adversarial examples (AEs), improving the adversarial robustness of models has emerged as an important resear...
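The abstract's "intentionally added noise" refers to adversarial perturbations. As a rough illustration only (this is not the paper's distillation-based purification method, and all names here are hypothetical), a minimal FGSM-style perturbation on a toy linear model can be sketched as:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method step: move each input coordinate by eps
    in the sign of the loss gradient, then clip back to the valid [0, 1] range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy linear model: score = w . x. For a loss that increases with the score,
# the gradient of the loss w.r.t. x is simply w (hypothetical example values).
w = np.array([0.5, -1.0, 0.8])
x = np.array([0.2, 0.9, 0.1])   # a "clean" input with pixel values in [0, 1]
eps = 0.1                        # perturbation budget per pixel

x_adv = fgsm_perturb(x, grad=w, eps=eps)

# The perturbation stays bounded: no pixel moves by more than eps.
assert np.all(np.abs(x_adv - x) <= eps + 1e-9)
```

Purification-based defenses like the one this article studies aim to remove such perturbations from `x_adv` before classification, rather than hardening the classifier itself.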
| Main Authors: | Inhwa Koo, Dong-Kyu Chae, Sang-Chul Lee |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2023-10-01 |
| Series: | Applied Sciences |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2076-3417/13/20/11313 |
Similar Items
- Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation
  by: Bin WANG, Simin LI, Yaguan QIAN, Jun ZHANG, Chaohao LI, Chenming ZHU, Hongfei ZHANG
  Published: (2022-12-01)
- Purifying Adversarial Images Using Adversarial Autoencoder With Conditional Normalizing Flows
  by: Yi Ji, et al.
  Published: (2023-01-01)
- A Survey on Efficient Methods for Adversarial Robustness
  by: Awais Muhammad, et al.
  Published: (2022-01-01)
- Gaussian class-conditional simplex loss for accurate, adversarially robust deep classifier training
  by: Arslan Ali, et al.
  Published: (2023-03-01)
- CANARY: An Adversarial Robustness Evaluation Platform for Deep Learning Models on Image Classification
  by: Jiazheng Sun, et al.
  Published: (2023-08-01)