Limitations and loopholes in the EU AI Act and AI Liability Directives: what this means for the European Union, the United States, and beyond

Bibliographic Details
Main Author: Wachter, S
Format: Journal article
Language:English
Published: Yale Law School 2024
collection OXFORD
description Predictive and generative artificial intelligence (AI) have both become integral parts of our lives through their use in making highly impactful decisions. AI systems are already deployed widely—for example, in employment, healthcare, insurance, finance, education, public administration, and criminal justice. Yet severe ethical issues, such as bias and discrimination, privacy invasiveness, opaqueness, and environmental costs of these systems, are well known. Generative AI (GAI) creates hallucinations and inaccurate or harmful information, which can lead to misinformation, disinformation, and the erosion of scientific knowledge. The Artificial Intelligence Act (AIA), Product Liability Directive, and the Artificial Intelligence Liability Directive reflect Europe's attempt to curb some of these issues. With the legal reach of these policies going far beyond Europe, their impact on the United States and the rest of the world cannot be overstated.

In this Essay, I show how the strong lobbying efforts of big tech companies and member states were unfortunately able to water down much of the AIA. An overreliance on self-regulation, self-certification, weak oversight and investigatory mechanisms, and far-reaching exceptions for both the public and private sectors are the product of this lobbying. Next, I reveal the similar enforcement limitations of the liability frameworks, which focus on material harm while ignoring harm that is immaterial, monetary, and societal, such as bias, hallucinations, and financial losses due to faulty AI products. Lastly, I explore how these loopholes can be closed to create a framework that effectively guards against novel risks caused by AI in the European Union, the United States, and beyond.
id oxford-uuid:0525099f-88c6-4690-abfa-741a8c057e00
institution University of Oxford