Towards verifying robustness of neural networks against a family of semantic perturbations
Verifying robustness of neural networks given a specified threat model is a fundamental yet challenging task. While current verification methods mainly focus on the p-norm threat model of the input instances, robustness verification against semantic adversarial attacks inducing large p-norm perturba...
Main Authors: | Mohapatra, Jeet, Weng, Tsui-Wei, Chen, Pin-Yu, Liu, Sijia, Daniel, Luca |
Other Authors: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Format: | Article |
Language: | English |
Published: | IEEE, 2021 |
Online Access: | https://hdl.handle.net/1721.1/130001 |
Similar Items
- Towards Certificated Model Robustness Against Weight Perturbations
  by: Weng, Tsui-Wei, et al.
  Published: (2022)
- CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
  by: Boopathy, Akhilan, et al.
  Published: (2021)
- Hidden Cost of Randomized Smoothing
  by: Mohapatra, Jeet, et al.
  Published: (2022)
- POPQORN: Quantifying robustness of recurrent neural networks
  by: Weng, Tsui-Wei, et al.
  Published: (2021)
- Efficient Neural Network Robustness Certification with General Activation Functions
  by: Zhang, Huan, et al.
  Published: (2021)