Robust decision trees against adversarial examples

© 2019 by the Author(s). Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue in tree-based models, and on how to make tree-based models robust against adversarial examples, is still limited. In this p...

Full description

Bibliographic Details
Main Authors: Chen, H, Zhang, H, Boning, D, Hsieh, CJ
Format: Article
Language: English
Published: 2021
Online Access: https://hdl.handle.net/1721.1/132249
_version_ 1811093535350325248
author Chen, H
Zhang, H
Boning, D
Hsieh, CJ
author_facet Chen, H
Zhang, H
Boning, D
Hsieh, CJ
author_sort Chen, H
collection MIT
description © 2019 by the Author(s). Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue in tree-based models, and on how to make tree-based models robust against adversarial examples, is still limited. In this paper, we show that tree-based models are also vulnerable to adversarial examples and develop a novel algorithm to learn robust trees. At its core, our method aims to optimize the performance under the worst-case perturbation of input features, which leads to a max-min saddle point problem. Incorporating this saddle point objective into the decision tree building procedure is non-trivial due to the discrete nature of trees: a naive approach to finding the best split according to this saddle point objective would take exponential time. To make our approach practical and scalable, we propose efficient tree building algorithms that approximate the inner minimizer in this saddle point problem, and present efficient implementations for classical information-gain-based trees as well as state-of-the-art tree boosting models such as XGBoost. Experimental results on real-world datasets demonstrate that the proposed algorithms can substantially improve the robustness of tree-based models against adversarial examples.
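The core idea the abstract describes, scoring each candidate split under the worst-case perturbation of input features rather than on the clean data, can be sketched roughly as follows. This is a hedged illustration only, not the authors' exact algorithm: the function name `robust_split_score`, the use of Gini gain, and the two-assignment approximation of the inner minimizer are all simplifications introduced here. The key observation it illustrates is that under an L-infinity perturbation of size eps, only points within eps of the split threshold are ambiguous (an adversary can push them to either side), so the inner minimization can be approximated by trying a few coarse assignments of that ambiguous set and keeping the worst resulting gain.

```python
import numpy as np

def gini(labels):
    """Gini impurity of an integer label array; 0 for an empty node."""
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels) / len(labels)
    return 1.0 - np.sum(p ** 2)

def robust_split_score(x, y, threshold, eps):
    """Worst-case Gini gain of splitting feature values `x` at `threshold`
    when each value may be perturbed by up to `eps` (illustrative sketch).
    Points farther than eps from the threshold are fixed to one side;
    points within eps are ambiguous. We approximate the adversary's inner
    minimization by evaluating two coarse assignments of the ambiguous set
    (all-left, all-right) and returning the minimum gain."""
    left_fixed = x < threshold - eps
    right_fixed = x >= threshold + eps
    ambiguous = ~(left_fixed | right_fixed)
    n = len(x)
    parent = gini(y)
    worst_gain = np.inf
    for amb_to_left in (True, False):
        left = left_fixed | (ambiguous if amb_to_left else np.zeros(n, bool))
        right = ~left
        gain = parent - (left.sum() / n) * gini(y[left]) \
                      - (right.sum() / n) * gini(y[right])
        worst_gain = min(worst_gain, gain)
    return worst_gain
```

With eps = 0 this reduces to the ordinary (clean) gain; as eps grows, more points become ambiguous and the worst-case gain drops, which is why a robust tree builder may prefer a different, less adversarially fragile threshold than a standard greedy builder would.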
first_indexed 2024-09-23T15:46:39Z
format Article
id mit-1721.1/132249
institution Massachusetts Institute of Technology
language English
last_indexed 2024-09-23T15:46:39Z
publishDate 2021
record_format dspace
spelling mit-1721.1/1322492021-09-21T03:09:55Z Robust decision trees against adversarial examples Chen, H Zhang, H Boning, D Hsieh, CJ © 2019 by the Author(s). Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue in tree-based models, and on how to make tree-based models robust against adversarial examples, is still limited. In this paper, we show that tree-based models are also vulnerable to adversarial examples and develop a novel algorithm to learn robust trees. At its core, our method aims to optimize the performance under the worst-case perturbation of input features, which leads to a max-min saddle point problem. Incorporating this saddle point objective into the decision tree building procedure is non-trivial due to the discrete nature of trees: a naive approach to finding the best split according to this saddle point objective would take exponential time. To make our approach practical and scalable, we propose efficient tree building algorithms that approximate the inner minimizer in this saddle point problem, and present efficient implementations for classical information-gain-based trees as well as state-of-the-art tree boosting models such as XGBoost. Experimental results on real-world datasets demonstrate that the proposed algorithms can substantially improve the robustness of tree-based models against adversarial examples. 2021-09-20T18:21:28Z 2021-09-20T18:21:28Z 2020-12-03T15:27:16Z Article http://purl.org/eprint/type/ConferencePaper https://hdl.handle.net/1721.1/132249 en http://proceedings.mlr.press/v97/ 36th International Conference on Machine Learning, ICML 2019 Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/ application/pdf arXiv
spellingShingle Chen, H
Zhang, H
Boning, D
Hsieh, CJ
Robust decision trees against adversarial examples
title Robust decision trees against adversarial examples
title_full Robust decision trees against adversarial examples
title_fullStr Robust decision trees against adversarial examples
title_full_unstemmed Robust decision trees against adversarial examples
title_short Robust decision trees against adversarial examples
title_sort robust decision trees against adversarial examples
url https://hdl.handle.net/1721.1/132249
work_keys_str_mv AT chenh robustdecisiontreesagainstadversarialexamples
AT zhangh robustdecisiontreesagainstadversarialexamples
AT boningd robustdecisiontreesagainstadversarialexamples
AT hsiehcj robustdecisiontreesagainstadversarialexamples