Supporting Trustworthy AI Through Machine Unlearning
Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to tra...
| Main Authors: | Hine, E, Novelli, C, Taddeo, M, Floridi, L |
|---|---|
| Format: | Journal article |
| Language: | English |
| Published: | Springer, 2024 |
Similar Items
- AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act
  by: Novelli, C, et al.
  Published: (2024)
- Taking AI risks seriously: a new assessment model for the AI Act
  by: Novelli, C, et al.
  Published: (2023)
- Corrective machine unlearning
  by: Goel, S, et al.
  Published: (2024)
- Coded Machine Unlearning
  by: Nasser Aldaghri, et al.
  Published: (2021-01-01)
- Review of Machine Unlearning
  by: HE Lisong, YANG Yang
  Published: (2024-11-01)