Corrective machine unlearning

Machine learning models increasingly face data integrity challenges due to the use of large-scale training datasets drawn from the internet. We study what model developers can do if they detect that some training data was manipulated or incorrect. Such manipulated data can cause adverse effects, including vulnerability to backdoored samples, systematic biases, and, in general, reduced accuracy on certain input domains. Often, not all manipulated training samples are known; only a small, representative subset of the affected data is flagged.

We formalize "Corrective Machine Unlearning" as the problem of mitigating the impact of data affected by unknown manipulations on a trained model, possibly knowing only a subset of the impacted samples. We show that corrective unlearning has significantly different requirements from traditional privacy-oriented unlearning. We find that most existing unlearning methods, including the gold-standard retraining-from-scratch, require most of the manipulated data to be identified for effective corrective unlearning. However, one approach, SSD, achieves limited success in unlearning adverse effects with just a small portion of the manipulated samples, showing that this setting is tractable. We hope our work spurs research toward better methods for corrective unlearning and offers practitioners a new strategy for handling data integrity challenges arising from web-scale training. We release our code at https://github.com/drimpossible/corrective-unlearning-bench.
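
As a concrete illustration of the setting, below is a minimal sketch on synthetic data using scikit-learn, with retraining-from-scratch as the unlearning baseline. The label-flip manipulation, the 20% identification rate, and every name in the snippet are illustrative assumptions rather than the paper's exact benchmark; the point is only to show how residual corruption can be measured when just a fraction of the manipulated samples is flagged.

    # Corrective unlearning setting on synthetic data. The "unlearning method"
    # here is the gold-standard baseline: retrain from scratch without the
    # *known* manipulated samples. All details are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic binary task: the true label is the sign of the first feature.
    X = rng.normal(size=(5000, 10))
    y = (X[:, 0] > 0).astype(int)

    # An adversary flips labels on one slice of the input domain.
    manipulated = X[:, 1] > 1.0
    y_corrupt = y.copy()
    y_corrupt[manipulated] = 1 - y_corrupt[manipulated]

    # The developer later identifies only 20% of the manipulated samples.
    manip_idx = np.flatnonzero(manipulated)
    known = rng.choice(manip_idx, size=len(manip_idx) // 5, replace=False)

    # Retrain from scratch on everything except the known-bad samples.
    keep = np.ones(len(X), dtype=bool)
    keep[known] = False
    model = LogisticRegression(max_iter=1000).fit(X[keep], y_corrupt[keep])

    # Residual corruption shows up as lower accuracy on the affected domain.
    X_test = rng.normal(size=(5000, 10))
    y_test = (X_test[:, 0] > 0).astype(int)
    affected = X_test[:, 1] > 1.0
    pred = model.predict(X_test)
    print("accuracy, unaffected domain:", (pred[~affected] == y_test[~affected]).mean())
    print("accuracy, affected domain:  ", (pred[affected] == y_test[affected]).mean())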

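The abstract singles out one approach, SSD (Selective Synaptic Dampening), as partially succeeding with few flagged samples. The sketch below paraphrases the general idea as we understand it: dampen parameters that are disproportionately important to the flagged (forget) samples, with importance estimated by a diagonal-Fisher proxy. It is not the authors' exact method; `diag_importance`, `dampen`, `alpha`, and `lam` are hypothetical names.

    # Rough paraphrase of the Fisher-importance dampening idea (not the exact
    # SSD algorithm): parameters far more important to the forget set than to
    # the full training set get shrunk toward zero.
    import torch

    def diag_importance(model, loss_fn, loader):
        """Per-parameter mean squared gradient (a diagonal-Fisher proxy)."""
        imp = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        batches = 0
        for x, y in loader:
            model.zero_grad()
            loss_fn(model(x), y).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    imp[n] += p.grad.detach() ** 2
            batches += 1
        return {n: v / max(batches, 1) for n, v in imp.items()}

    @torch.no_grad()
    def dampen(model, imp_full, imp_forget, alpha=10.0, lam=1.0):
        """Shrink weights whose forget-set importance dominates the full-set
        importance (hypothetical hyperparameters alpha and lam)."""
        for n, p in model.named_parameters():
            mask = imp_forget[n] > alpha * imp_full[n]
            scale = torch.clamp(lam * imp_full[n] / (imp_forget[n] + 1e-12), max=1.0)
            p[mask] *= scale[mask]

Usage would be two importance passes, one over the full training data and one over the flagged subset, followed by a single dampen call; no further gradient steps on the retained data are taken in this sketch.
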
Bibliographic Details
Main Authors: Goel, S; Prabhu, A; Torr, P; Kumaraguru, P; Sanyal, A
Format: Conference item
Language: English
Published: OpenReview, 2024
Institution: University of Oxford