Autonomised harming

This paper sketches elements of a theory of the ethics of autonomised harming: the phenomenon of delegating decisions about whether and whom to harm to artificial intelligence (AI) in self-driving cars and autonomous weapon systems. First, the paper elucidates the challenge of integrating non-human, artificial agents, which lack rights and duties, into our moral framework, which relies on precisely these notions to determine the permissibility of harming. Second, the paper examines how potential differences between human agents and non-human, artificial agents might bear on the permissibility of delegating life-and-death decisions to AI systems. Third, and finally, the paper explores a series of resulting complexities. These include the challenge of weighing autonomous systems’ promise to reduce harm against the intrinsic value of rectificatory justice, as well as the peculiar possibility that delegating harmful acts to AI might render ordinarily impermissible acts permissible. By illuminating what happens when we extend normative theory beyond its traditional boundaries, this discussion offers a starting point for assessing the moral permissibility of delegating consequential decisions to non-human, artificial agents.

Bibliographic Details
Main Author: Eggert, L
Format: Journal article
Language: English
Published: Springer Nature 2023