Privacy and Explainability: The Effects of Data Protection on Shapley Values


Bibliographic Details
Main Authors: Aso Bozorgpanah, Vicenç Torra, Laya Aliahmadipour
Format: Article
Language: English
Published: MDPI AG, 2022-12-01
Series: Technologies
Online Access: https://www.mdpi.com/2227-7080/10/6/125
Description
Summary: There is an increasing need to provide explainability for machine learning models. Different alternatives exist for providing explainability, for example, local and global methods; one approach is based on Shapley values. Privacy is another critical requirement when dealing with sensitive data, since data-driven machine learning models may lead to disclosure of information about the data they were trained on. Data privacy provides several methods for ensuring privacy. In this paper, we study how explainability methods based on Shapley values are affected by privacy-preserving data protection methods. We show that some degree of protection still permits the information in the Shapley values to be maintained for the four machine learning models studied. The experiments suggest that, among the four models, the Shapley values of linear models are the most affected.
ISSN:2227-7080
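
The abstract describes comparing Shapley-value explanations computed before and after data protection. This record does not detail the paper's actual protection mechanisms or models, so the following is only a minimal illustrative sketch: it computes exact Shapley values by coalition enumeration for a hypothetical linear model, applies additive Gaussian noise as a stand-in protection method, and measures how much the explanation is distorted. All names, weights, and the noise level are assumptions for illustration.

```python
from itertools import combinations
from math import factorial
import random

def shapley_values(f, x, baseline):
    """Exact Shapley values for one prediction: features outside a
    coalition S are replaced by their baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # coalition sizes 0 .. n-1 over the other features
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear model and record (not taken from the paper).
w = [2.0, -1.0, 0.5]

def predict(z):
    return sum(wi * zi for wi, zi in zip(w, z))

x = [1.0, 3.0, 2.0]
base = [0.0, 0.0, 0.0]
phi = shapley_values(predict, x, base)  # for a linear model with zero baseline, phi_i = w_i * x_i

# A simple stand-in protection method: additive Gaussian noise masking.
random.seed(0)
x_noisy = [xi + random.gauss(0, 0.1) for xi in x]
phi_noisy = shapley_values(predict, x_noisy, base)

# Distortion of the explanation caused by the protection.
distortion = sum(abs(a - b) for a, b in zip(phi, phi_noisy))
print(phi, phi_noisy, distortion)
```

Both sets of Shapley values satisfy the efficiency property (they sum to the difference between the prediction and the baseline prediction), which makes the per-feature distortion directly comparable across protection levels.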