An Ethic of Military Uses of Artificial Intelligence: Sustaining Virtue, Granting Autonomy, and Calibrating Risk
Artificial intelligence in military operations comes in two kinds. First, there is narrow or specific intelligence – the autonomous ability to identify an instance of a species of target, and to track its changes of position. Second, there is broad or general intelligence – the autonomous ability to choose a species of target, identify instances, track their movements, decide when to strike them, learn from errors, and improve initial choices. These two kinds of artificial intelligence raise ethical questions mainly because of two features: the physical distance they put between the human agents deploying them and their targets, and their ability to act independently of those agents. The main ethical questions these features raise are three. First, how to maintain the traditional martial virtues of fortitude and chivalry while operating lethal weapons at a safe distance? Second, how much autonomy to grant a machine? And third, what risks to take with the possibility of technical error? This paper considers each of these questions in turn.

Main Author: | Nigel Biggar |
Format: | Article |
Language: | English |
Published: | The NKUA Applied Philosophy Research Laboratory, 2023-12-01 |
Series: | Conatus - Journal of Philosophy |
Subjects: | artificial intelligence; war; weaponry; ethics; virtues; autonomy |
Online Access: | https://ejournals.epublishing.ekt.gr/index.php/Conatus/article/view/34666 |
author | Nigel Biggar |
collection | DOAJ |
description | Artificial intelligence in military operations comes in two kinds. First, there is narrow or specific intelligence – the autonomous ability to identify an instance of a species of target, and to track its changes of position. Second, there is broad or general intelligence – the autonomous ability to choose a species of target, identify instances, track their movements, decide when to strike them, learn from errors, and improve initial choices. These two kinds of artificial intelligence raise ethical questions mainly because of two features: the physical distance they put between the human agents deploying them and their targets, and their ability to act independently of those agents. The main ethical questions these features raise are three. First, how to maintain the traditional martial virtues of fortitude and chivalry while operating lethal weapons at a safe distance? Second, how much autonomy to grant a machine? And third, what risks to take with the possibility of technical error? This paper considers each of these questions in turn.
first_indexed | 2024-03-08T17:50:25Z |
format | Article |
id | doaj.art-bef7835fa0164fb6ac1ff8677894ffc5 |
institution | Directory Open Access Journal |
issn | 2653-9373; 2459-3842
language | English |
last_indexed | 2024-03-08T17:50:25Z |
publishDate | 2023-12-01 |
publisher | The NKUA Applied Philosophy Research Laboratory |
record_format | Article |
series | Conatus - Journal of Philosophy |
spelling | doaj.art-bef7835fa0164fb6ac1ff8677894ffc5 (2024-01-02T08:49:01Z); The NKUA Applied Philosophy Research Laboratory; Conatus - Journal of Philosophy; ISSN 2653-9373, 2459-3842; 2023-12-01; 8(2); DOI 10.12681/cjp.34666; "An Ethic of Military Uses of Artificial Intelligence: Sustaining Virtue, Granting Autonomy, and Calibrating Risk"; Nigel Biggar, University of Oxford, United Kingdom; https://ejournals.epublishing.ekt.gr/index.php/Conatus/article/view/34666
title | An Ethic of Military Uses of Artificial Intelligence: Sustaining Virtue, Granting Autonomy, and Calibrating Risk |
topic | artificial intelligence; war; weaponry; ethics; virtues; autonomy
url | https://ejournals.epublishing.ekt.gr/index.php/Conatus/article/view/34666 |