Towards Certificated Model Robustness Against Weight Perturbations

This work studies the sensitivity of neural networks to weight perturbations, starting from a newly developed threat model that perturbs the neural network parameters. We propose an efficient approach to compute a certified robustness bound of weight perturbations, within which neural networks will not produce the erroneous outputs desired by the adversary. In addition, we identify a useful connection between our developed certification method and the problem of weight quantization, a popular model compression technique in deep neural networks (DNNs) and a 'must-try' step in the design of DNN inference engines on resource-constrained computing platforms such as mobile devices, FPGAs, and ASICs. Specifically, we study the problem of weight quantization (weight perturbations in the non-adversarial setting) through the lens of certificated robustness, and we demonstrate significant improvements in the generalization ability of quantized networks through our robustness-aware quantization scheme.
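
The abstract links a certified bound on weight perturbations to weight quantization, which is itself a non-adversarial weight perturbation. As a rough, self-contained illustration of that connection only (not the paper's certification method), the Python sketch below assumes a hypothetical per-layer certified l-infinity radius eps_cert, within which the prediction is taken to be provably unchanged, and checks whether the perturbation introduced by uniform b-bit weight quantization stays inside that radius; the function names, the quantizer, and all numbers are placeholders.

# Illustrative sketch only, not the paper's certification algorithm: assume a
# hypothetical per-layer certified l_inf radius eps_cert (largest weight
# perturbation under which the prediction is taken to be provably unchanged)
# and check whether uniform b-bit quantization keeps the induced weight
# perturbation inside that radius.
import numpy as np


def uniform_quantize(w: np.ndarray, num_bits: int) -> np.ndarray:
    """Symmetric uniform quantization of a weight tensor to num_bits bits."""
    levels = 2 ** (num_bits - 1) - 1       # e.g. 127 representable steps for 8 bits
    scale = np.max(np.abs(w)) / levels     # quantization step size
    return np.round(w / scale) * scale     # de-quantized weights


def quantization_within_certificate(w: np.ndarray, num_bits: int, eps_cert: float) -> bool:
    """Treat quantization error as a non-adversarial weight perturbation and
    test whether its l_inf magnitude stays below the certified radius."""
    perturbation = float(np.max(np.abs(uniform_quantize(w, num_bits) - w)))
    return perturbation <= eps_cert


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.05, size=(256, 128))   # toy layer weights
    eps_cert = 1e-3                               # placeholder certified radius
    for bits in (8, 6, 4):
        ok = quantization_within_certificate(w, bits, eps_cert)
        print(f"{bits}-bit quantization stays within the certified radius: {ok}")

In the paper's setting, a radius of this kind would come from the certification procedure itself, and a robustness-aware quantizer would be chosen so that such a check passes; here eps_cert is simply a made-up constant.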

Bibliographic Details
Main Authors: Weng, Tsui-Wei; Zhao, Pu; Liu, Sijia; Chen, Pin-Yu; Lin, Xue; Daniel, Luca
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Research Laboratory of Electronics; MIT-IBM Watson AI Lab
Format: Article
Language: English
Published: Association for the Advancement of Artificial Intelligence (AAAI), 2020
Online Access: https://hdl.handle.net/1721.1/143107
Genre: Conference Paper (http://purl.org/eprint/type/ConferencePaper)
Published in: Proceedings of the AAAI Conference on Artificial Intelligence, 34 (04)
DOI: 10.1609/AAAI.V34I04.6105
Citation: Weng, Tsui-Wei, Zhao, Pu, Liu, Sijia, Chen, Pin-Yu, Lin, Xue et al. 2020. "Towards Certificated Model Robustness Against Weight Perturbations." Proceedings of the AAAI Conference on Artificial Intelligence, 34 (04).
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Repository Record: mit-1721.1/143107 (DSpace, Massachusetts Institute of Technology; deposited 2022-06-13; application/pdf)