Learning Explainable Decision Rules via Maximum Satisfiability
Decision trees are a popular choice for providing explainable machine learning, since they make explicit how different features contribute towards the prediction. We apply tools from constraint satisfaction to learn optimal decision trees in the form of sparse k-CNF (Conjunctive Normal Form) rules....
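The abstract describes encoding rule learning as a constraint-satisfaction problem solved with weighted MaxSAT. The following is a minimal sketch of that general idea, assuming binary features, the PySAT package (python-sat), a fixed number of clauses, toy data, and ad-hoc weights; it illustrates one possible encoding, not the paper's exact one, and the per-clause length bound (the "k" in k-CNF) is only encouraged here through the sparsity penalty rather than enforced with a cardinality constraint.

```python
# Sketch: learn a small, sparse CNF classification rule with weighted MaxSAT.
# Assumptions (not from the paper): PySAT (`pip install python-sat`), toy data,
# weight 1 per selected literal and weight 10 per misclassified example.
from itertools import count

from pysat.examples.rc2 import RC2
from pysat.formula import WCNF

X = [[1, 1, 0], [1, 1, 1], [1, 0, 1], [0, 1, 1]]  # binary feature vectors
y = [1, 1, 0, 0]                                   # class labels
n_clauses, n_feats = 2, len(X[0])                  # rule = conjunction of 2 clauses

_ids = count(1)
def new_var() -> int:
    return next(_ids)

# Selector variables: pos[c][j] / neg[c][j] mean "clause c contains x_j / NOT x_j".
pos = [[new_var() for _ in range(n_feats)] for _ in range(n_clauses)]
neg = [[new_var() for _ in range(n_feats)] for _ in range(n_clauses)]

wcnf = WCNF()

# Sparsity: pay weight 1 for every literal included in the rule.
for c in range(n_clauses):
    for j in range(n_feats):
        wcnf.append([-pos[c][j]], weight=1)
        wcnf.append([-neg[c][j]], weight=1)

for x, label in zip(X, y):
    err = new_var()                 # true iff this example is misclassified
    wcnf.append([-err], weight=10)  # misclassifying one example costs 10
    if label == 1:
        # Unless we accept an error, every clause must contain a literal true under x.
        for c in range(n_clauses):
            true_lits = [pos[c][j] for j in range(n_feats) if x[j] == 1] + \
                        [neg[c][j] for j in range(n_feats) if x[j] == 0]
            wcnf.append([err] + true_lits)
    else:
        # Unless we accept an error, some clause must be falsified by x.
        # u_c may only be true if clause c contains no literal that x satisfies.
        us = []
        for c in range(n_clauses):
            u = new_var()
            us.append(u)
            for j in range(n_feats):
                blocked = pos[c][j] if x[j] == 1 else neg[c][j]
                wcnf.append([-u, -blocked])
        wcnf.append([err] + us)

model = RC2(wcnf).compute()
chosen = {v for v in model if v > 0}
for c in range(n_clauses):
    lits = [f"x{j}" for j in range(n_feats) if pos[c][j] in chosen] + \
           [f"NOT x{j}" for j in range(n_feats) if neg[c][j] in chosen]
    print(f"clause {c}:", " OR ".join(lits) if lits else "(empty)")
```

For the toy labels above (y = x0 AND x1), the cheapest solution selects the two single-literal clauses (x0) and (x1) at a sparsity cost of 2, which is less than the cost of any misclassification. The price of this optimality is the combinatorial search itself, consistent with the abstract's remark that the global model is computationally more expensive than heuristic tree learners.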
Main Authors: | Henrik E. C. Cao, Riku Sarlin, Alexander Jung |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Classification; combinatorial optimization; decision tree; explainable AI; machine learning; weighted MaxSAT |
Online Access: | https://ieeexplore.ieee.org/document/9272729/ |
_version_ | 1819047510667165696 |
---|---|
author | Henrik E. C. Cao; Riku Sarlin; Alexander Jung |
author_facet | Henrik E. C. Cao; Riku Sarlin; Alexander Jung |
author_sort | Henrik E. C. Cao |
collection | DOAJ |
description | Decision trees are a popular choice for providing explainable machine learning, since they make explicit how different features contribute towards the prediction. We apply tools from constraint satisfaction to learn optimal decision trees in the form of sparse k-CNF (Conjunctive Normal Form) rules. We develop two methods offering different trade-offs between accuracy and computational complexity: one offline method that learns decision trees using the entire training dataset and one online method that learns decision trees over a local subset of the training dataset. This subset is obtained from training examples near a query point (see the sketch following this record). The developed methods are applied to a number of datasets in both online and offline settings. We found that our methods learn decision trees that are significantly more accurate than those learned by existing heuristic approaches. However, the global decision tree model tends to be computationally more expensive than heuristic approaches. The online method is faster to train and finds smaller decision trees with an accuracy comparable to that of the k-nearest-neighbour method. |
first_indexed | 2024-12-21T11:01:31Z |
format | Article |
id | doaj.art-da5f91002b724b238ff57d660e56dffa |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-21T11:01:31Z |
publishDate | 2020-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-da5f91002b724b238ff57d660e56dffa (2022-12-21T19:06:20Z); English; IEEE; IEEE Access, ISSN 2169-3536; 2020-01-01; vol. 8, pp. 218180-218185; DOI 10.1109/ACCESS.2020.3041040; IEEE article 9272729. Learning Explainable Decision Rules via Maximum Satisfiability. Henrik E. C. Cao (https://orcid.org/0000-0003-0525-4344), Department of Computer Science, Aalto University, Espoo, Finland; Riku Sarlin (https://orcid.org/0000-0002-0708-1830), Social Insurance Institution of Finland, Helsinki, Finland; Alexander Jung (https://orcid.org/0000-0001-7538-0990), Department of Computer Science, Aalto University, Espoo, Finland. Decision trees are a popular choice for providing explainable machine learning, since they make explicit how different features contribute towards the prediction. We apply tools from constraint satisfaction to learn optimal decision trees in the form of sparse k-CNF (Conjunctive Normal Form) rules. We develop two methods offering different trade-offs between accuracy and computational complexity: one offline method that learns decision trees using the entire training dataset and one online method that learns decision trees over a local subset of the training dataset. This subset is obtained from training examples near a query point. The developed methods are applied to a number of datasets in both online and offline settings. We found that our methods learn decision trees that are significantly more accurate than those learned by existing heuristic approaches. However, the global decision tree model tends to be computationally more expensive than heuristic approaches. The online method is faster to train and finds smaller decision trees with an accuracy comparable to that of the k-nearest-neighbour method. https://ieeexplore.ieee.org/document/9272729/ Keywords: Classification; combinatorial optimization; decision tree; explainable AI; machine learning; weighted MaxSAT |
spellingShingle | Henrik E. C. Cao; Riku Sarlin; Alexander Jung; Learning Explainable Decision Rules via Maximum Satisfiability; IEEE Access; Classification; combinatorial optimization; decision tree; explainable AI; machine learning; weighted MaxSAT |
title | Learning Explainable Decision Rules via Maximum Satisfiability |
title_full | Learning Explainable Decision Rules via Maximum Satisfiability |
title_fullStr | Learning Explainable Decision Rules via Maximum Satisfiability |
title_full_unstemmed | Learning Explainable Decision Rules via Maximum Satisfiability |
title_short | Learning Explainable Decision Rules via Maximum Satisfiability |
title_sort | learning explainable decision rules via maximum satisfiability |
topic | Classification; combinatorial optimization; decision tree; explainable AI; machine learning; weighted MaxSAT |
url | https://ieeexplore.ieee.org/document/9272729/ |
work_keys_str_mv | AT henrikeccao learningexplainabledecisionrulesviamaximumsatisfiability AT rikusarlin learningexplainabledecisionrulesviamaximumsatisfiability AT alexanderjung learningexplainabledecisionrulesviamaximumsatisfiability |
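The description above also mentions an online variant that learns a rule only from the training examples nearest to a query point. The sketch below shows that local-subset step under stated assumptions: Hamming distance over binary features, hypothetical function names, and a stand-in majority-vote learner where the paper's method would instead fit a sparse k-CNF rule (for example with a MaxSAT encoding like the one sketched earlier).

```python
# Sketch of the "online" idea from the abstract: gather the training examples
# closest to a query point and fit a small rule on that local subset.
# The names, the Hamming distance choice, and the stand-in learner below are
# illustrative assumptions, not the paper's code.
from collections import Counter
from typing import Callable, List, Tuple

Example = Tuple[List[int], int]   # (binary feature vector, class label)

def hamming(a: List[int], b: List[int]) -> int:
    """Number of positions where two binary vectors differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

def local_subset(train: List[Example], query: List[int], k: int) -> List[Example]:
    """The k training examples nearest to the query, as in k-NN."""
    return sorted(train, key=lambda ex: hamming(ex[0], query))[:k]

def majority_rule(subset: List[Example]) -> Callable[[List[int]], int]:
    """Stand-in learner: predicts the majority label of the local subset.
    The paper's method would instead fit a sparse k-CNF rule here
    (e.g. with a weighted MaxSAT solver, as sketched earlier)."""
    label = Counter(lbl for _, lbl in subset).most_common(1)[0][0]
    return lambda x: label

def online_predict(train: List[Example], query: List[int], k: int = 5,
                   learner: Callable[[List[Example]],
                                     Callable[[List[int]], int]] = majority_rule) -> int:
    subset = local_subset(train, query, k)
    rule = learner(subset)        # fit a small, explainable rule locally
    return rule(query)

if __name__ == "__main__":
    train = [([1, 0, 1], 1), ([1, 1, 0], 1), ([0, 1, 1], 0), ([0, 0, 1], 0)]
    print(online_predict(train, [1, 0, 0], k=3))
```

With the stand-in learner this reduces to k-nearest-neighbour voting, the baseline the abstract compares against; swapping in a local MaxSAT rule learner is what yields the smaller, per-query decision rules the abstract describes.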