Supporting Trustworthy AI Through Machine Unlearning
Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology.
Main Authors: | Hine, E; Novelli, C; Taddeo, M; Floridi, L |
---|---|
Format: | Journal article |
Language: | English |
Published: | Springer, 2024 |
author | Hine, E; Novelli, C; Taddeo, M; Floridi, L |
collection | OXFORD |
description | Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology. |
format | Journal article |
id | oxford-uuid:939049af-df0b-4443-9331-e2fb10244db9 |
institution | University of Oxford |
language | English |
publishDate | 2024 |
publisher | Springer |
title | Supporting Trustworthy AI Through Machine Unlearning |