On the hardness of robust classification


Bibliographic Details
Main Authors: Gourdeau, P, Kanade, V, Kwiatkowska, M, Worrell, J
Format: Conference item
Language: English
Published: Neural Information Processing Systems Foundation 2019
collection OXFORD
description It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. In this paper we study the feasibility of robust learning from the perspective of computational learning theory, considering both sample and computational complexity. In particular, our definition of robust learnability requires polynomial sample complexity. We start with two negative results. We show that no non-trivial concept class can be robustly learned in the distribution-free setting against an adversary who can perturb just a single input bit. We show moreover that the class of monotone conjunctions cannot be robustly learned under the uniform distribution against an adversary who can perturb ω(log n) input bits. However, if the adversary is restricted to perturbing O(log n) bits, then the class of monotone conjunctions can be robustly learned with respect to a general class of distributions (that includes the uniform distribution). Finally, we provide a simple proof of the computational hardness of robust learning on the Boolean hypercube. Unlike previous results of this nature, our result does not rely on another computational model (e.g. the statistical query model) nor on any hardness assumption other than the existence of a hard learning problem in the PAC framework.
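The adversarial model in the description can be made concrete with a small illustrative sketch (the names and the exhaustive search are my own, not from the paper): a monotone conjunction over the Boolean hypercube, and a check for whether an adversary flipping at most k input bits can make a hypothesis disagree with the target concept.

```python
# Illustrative sketch only: monotone conjunctions and a bit-flipping adversary.
from itertools import combinations

def monotone_conjunction(relevant):
    """A monotone conjunction: the AND of the coordinates in `relevant`."""
    return lambda x: all(x[i] == 1 for i in relevant)

def exists_adversarial_flip(h, c, x, k):
    """Return True if flipping at most k bits of x makes hypothesis h
    disagree with target concept c. (Exhaustive search, for illustration.)"""
    n = len(x)
    for budget in range(k + 1):
        for idxs in combinations(range(n), budget):
            z = list(x)
            for i in idxs:
                z[i] ^= 1          # flip the chosen bits
            if h(z) != c(z):
                return True
    return False

c = monotone_conjunction([0, 1])   # target concept: x0 AND x1
h = monotone_conjunction([0])      # hypothesis that ignores x1
x = [1, 1, 1]                      # h and c agree on x itself
```

On x = [1, 1, 1] the hypothesis classifies correctly, yet flipping the single bit x1 yields a point where h and c disagree, so h is not robust at x even against a budget of k = 1 bit.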
id oxford-uuid:274bdf38-35f5-41f4-9c53-76527d74f19a
institution University of Oxford