Algorithms that remember: model inversion attacks and data protection law

Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature around ‘model inversion’ and ‘membership inference’ attacks, which indicates that the process of turning training data into machine-learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.
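For illustration only (this sketch is not from the paper itself): a minimal membership inference attack of the confidence-thresholding kind the abstract alludes to, assuming a scikit-learn classifier trained on synthetic data standing in for personal records. The attacker flags a record as a training-set member when the model's confidence on the record's true label exceeds a threshold, exploiting the tendency of models to be more confident on data they were trained on.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical setup: synthetic records, half used for training
# ("members") and half held out ("non-members").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# An overfit model "remembers" its training data in its confidence scores.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_member, y_member)

def infer_membership(model, X, y, threshold=0.9):
    # Flag a record as a training member when the model's confidence
    # on the record's true label exceeds the threshold.
    confidence = model.predict_proba(X)[np.arange(len(y)), y]
    return confidence > threshold

# The gap between these two rates is the privacy leak: the model's
# outputs reveal which records it was trained on.
member_rate = infer_membership(model, X_member, y_member).mean()
nonmember_rate = infer_membership(model, X_nonmember, y_nonmember).mean()
print(f"flagged {member_rate:.0%} of training records, "
      f"{nonmember_rate:.0%} of unseen records")

A large gap between the two rates is the sense in which the model is "not one-way": on the paper's argument, leakage of this kind is what could bring the model itself within the legal definition of personal data.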

Bibliographic Details
Main Authors: Veale, M; Binns, R; Edwards, L
Format: Journal article
Published: Royal Society, 2018