Documentation as a Tool for Algorithmic Accountability
This thesis argues that civil liability should rest on the deployer's understanding of system behavior, and that documentation is the necessary tool to accomplish this goal. This work begins by establishing the "hole" in current approaches to AI risk regulation: the lack of a civil liability regime.
Main Author: | Curtis, Taylor Lynn |
---|---|
Other Authors: | Hadfield-Menell, Dylan |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2024 |
Online Access: | https://hdl.handle.net/1721.1/157026 https://orcid.org/0009-0003-6408-2009 |
author | Curtis, Taylor Lynn |
author2 | Hadfield-Menell, Dylan |
collection | MIT |
description | This thesis argues that civil liability should rest on the deployer's understanding of system behavior, and that documentation is the necessary tool to accomplish this goal. The work begins by establishing the "hole" in current approaches to AI risk regulation: the lack of a civil liability regime. It also highlights that civil liability is an existing and effective regulatory tool that can be applied to AI. The rest of the thesis develops this argument by examining what is necessary for such a framework to exist, arguing that an understanding of system behavior is both essential and achievable through documentation. The thesis is divided into two substantive chapters. Chapter 2 outlines how system behavior can inform policy through documentation, linking the necessity of documentation to liability and proposing a concrete liability scheme based on documenting system understanding. Chapter 3 discusses how documentation can alter a person's understanding of system behavior, presenting a user study that demonstrates how system understanding can be achieved through documentation and structured data interaction; it argues that testing and system understanding are not insurmountable challenges, and that by engaging in a relatively simple process, AI deployers can better understand the behavior of their models. Overall, this thesis provides a methodical guide to understanding AI system behavior and establishes a new pathway for effective regulation, arguing that documented understanding of system behavior at deployment is the path forward to civil liability in AI. |
format | Thesis |
id | mit-1721.1/157026 |
institution | Massachusetts Institute of Technology |
publishDate | 2024 |
publisher | Massachusetts Institute of Technology |
record_format | dspace |
department | Massachusetts Institute of Technology. Institute for Data, Systems, and Society. Technology and Policy Program |
degree | S.M. |
date_issued | 2024-05 |
rights | Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0); Copyright retained by author(s); https://creativecommons.org/licenses/by-nc-nd/4.0/ |
file_format | application/pdf |
title | Documentation as a Tool for Algorithmic Accountability |
url | https://hdl.handle.net/1721.1/157026 https://orcid.org/0009-0003-6408-2009 |