Deux machina: a cross-disciplinary approach to artificial intelligence for regulatory understanding

Bibliographic Details
Main Author: Elliott-Renhard, A
Other Authors: Pila, J
Format: Thesis
Language: English
Published: 2021
Description
Summary: Legal scholars, policymakers and artificial intelligence researchers disagree about how to typify and describe artificial intelligence; the goalposts are constantly shifting with the advancement of technology, but this is an ancillary matter. Definitions aside, sophisticated decision-making technology is widespread and influential, and a more important normative question needs answering: when should machines be used to make decisions? Since the law is premised on clear avenues of responsibility, rationality, and objective justifications for decisions, the prospect of complex, non-human decision-makers tendering inscrutable decisions with considerable utility but ineffable reasoning is a novelty without comparison in the realm of technology regulation. I contend that this novelty necessitates a deeper, contextual understanding of artificially intelligent decision-making technology to provide a foundation for regulatory answers to the normative question and to resolve the ambiguity that pervades more general discussion. Meaningful answers cannot be obtained in the abstract and are tied to the domain in which machine decisions are made; broad doctrinal approaches that begin from the law, and take a technology-neutral approach to regulation, are thus inadequate. Medicine, a science and an art with a long history of integrating artificially intelligent technology, provides a useful domain for close inquiry. Reliance on machine decision-making by physicians and patients in clinical medical contexts reveals important considerations that high-level discussions do not uncover; the most important of these is that data-driven machines reason in a manner that is alien to human reasoning, and this incongruity is the source of the unintelligibility described vaguely as the ‘black box’ of machine reasoning. Nonetheless, carefully designed machine reasoning is consistent, insightful, and useful, and should be constrained by predefined structural roles for machines rather than by human notions of explanation. Comprehensive, unambiguous understanding is a fundamental prerequisite to the regulatory prescription of suitable roles for machines, and that understanding is what I provide here.