DC-SHAP Method for Consistent Explainability in Privacy-Preserving Distributed Machine Learning
Abstract: Ensuring the transparency of machine learning models is vital for their ethical application in various industries. There has been a concurrent trend of distributed machine learning designed to limit access to training data for privacy concerns. Such models, trained over horizontally or vertically partitioned data, present a challenge for explainable AI because the explaining party may have a biased view of background data or a partial view of the feature space. As a result, explanations obtained from different participants of distributed machine learning might not be consistent with one another, undermining trust in the product. This paper presents an Explainable Data Collaboration Framework based on a model-agnostic additive feature attribution algorithm (KernelSHAP) and the Data Collaboration method of privacy-preserving distributed machine learning. In particular, we present three algorithms for different scenarios of explainability in Data Collaboration and verify their consistency with experiments on open-access datasets. Our results demonstrated a significant (by at least a factor of 1.75) decrease in feature attribution discrepancies among the users of distributed machine learning. The proposed method improves consistency among explanations obtained from different participants, which can enhance trust in the product and enable ethical application in various industries.
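The inconsistency described in the abstract arises because each participant computes feature attributions against its own background data. As a minimal, hypothetical illustration of that discrepancy (not the paper's DC-SHAP algorithm), the sketch below runs KernelSHAP from the open-source `shap` package with two different background samples and measures how the resulting attributions differ; the model, data, and variable names are illustrative assumptions.

```python
# Illustrative sketch of the background-data inconsistency problem (NOT DC-SHAP itself).
# Two parties explain the same model with KernelSHAP, each using its own biased
# background sample, and their feature attributions diverge.
# Assumes the open-source `shap` and `scikit-learn` packages.
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                      # shared feature space
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)
model = LinearRegression().fit(X, y)               # stand-in for the jointly trained model

# Each party only holds its own (skewed) slice of the data and uses it as background.
background_a = X[:100][X[:100, 0] > 0]             # party A's biased background sample
background_b = X[100:][X[100:, 0] <= 0]            # party B's biased background sample

x_test = X[:5]                                     # instances both parties want explained
phi_a = shap.KernelExplainer(model.predict, background_a).shap_values(x_test)
phi_b = shap.KernelExplainer(model.predict, background_b).shap_values(x_test)

# Feature-attribution discrepancy between the two parties' explanations.
print("mean |phi_A - phi_B| per feature:", np.abs(phi_a - phi_b).mean(axis=0))
```

According to the abstract, the proposed DC-SHAP method, built on the Data Collaboration framework, reduces such attribution discrepancies among participants by at least a factor of 1.75.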
Main Authors: | Anna Bogdanova, Akira Imakura, Tetsuya Sakurai
---|---
Format: | Article
Language: | English
Published: | Springer Nature, 2023-07-01
Series: | Human-Centric Intelligent Systems
Subjects: | Distributed machine learning; Explainability; Federated learning; Data collaboration
Online Access: | https://doi.org/10.1007/s44230-023-00032-4
author | Anna Bogdanova; Akira Imakura; Tetsuya Sakurai
collection | DOAJ |
description | Abstract Ensuring the transparency of machine learning models is vital for their ethical application in various industries. There has been a concurrent trend of distributed machine learning designed to limit access to training data for privacy concerns. Such models, trained over horizontally or vertically partitioned data, present a challenge for explainable AI because the explaining party may have a biased view of background data or a partial view of the feature space. As a result, explanations obtained from different participants of distributed machine learning might not be consistent with one another, undermining trust in the product. This paper presents an Explainable Data Collaboration Framework based on a model-agnostic additive feature attribution algorithm (KernelSHAP) and Data Collaboration method of privacy-preserving distributed machine learning. In particular, we present three algorithms for different scenarios of explainability in Data Collaboration and verify their consistency with experiments on open-access datasets. Our results demonstrated a significant (by at least a factor of 1.75) decrease in feature attribution discrepancies among the users of distributed machine learning. The proposed method improves consistency among explanations obtained from different participants, which can enhance trust in the product and enable ethical application in various industries. |
format | Article |
id | doaj.art-086ddacd410c4e78b02820459cc975ce |
institution | Directory Open Access Journal |
issn | 2667-1336 |
language | English |
publishDate | 2023-07-01 |
publisher | Springer Nature |
record_format | Article |
series | Human-Centric Intelligent Systems |
spelling | Anna Bogdanova, Akira Imakura, Tetsuya Sakurai (Department of Computer Science, University of Tsukuba). DC-SHAP Method for Consistent Explainability in Privacy-Preserving Distributed Machine Learning. Human-Centric Intelligent Systems, Springer Nature, ISSN 2667-1336, vol. 3, no. 3, pp. 197-210, 2023-07-01. https://doi.org/10.1007/s44230-023-00032-4
title | DC-SHAP Method for Consistent Explainability in Privacy-Preserving Distributed Machine Learning |
topic | Distributed machine learning; Explainability; Federated learning; Data collaboration
url | https://doi.org/10.1007/s44230-023-00032-4 |