Anthropocentrism and Environmental Wellbeing in AI Ethics Standards: A Scoping Review and Discussion

As AI deployment has broadened, so too has awareness of the ethical implications and problems that may ensue from this deployment. In response, groups across multiple domains have issued AI ethics standards that rely on vague, high-level principles to find consensus. One such high-level principle that is common across the AI landscape is ‘human-centredness’, though it is often applied without due investigation into its merits and limitations and without a clear, common definition. This paper undertakes a scoping review of AI ethics standards to examine the commitment to ‘human-centredness’ and how this commitment interacts with other ethical concerns, namely, concerns for nonhuman animals and environmental wellbeing. We found that human-centred AI ethics standards tend to prioritise humans over nonhumans more so than nonhuman-centred standards. A critical analysis of our findings suggests that a commitment to human-centredness within AI ethics standards accords with the definition of anthropocentrism in moral philosophy: that humans have, at least, more intrinsic moral value than nonhumans. We consider some of the limitations of anthropocentric AI ethics, which include permitting harm to the environment and animals and undermining the stability of ecosystems.

Bibliographic Details
Main Authors: Eryn Rigley, Adriane Chapman, Christine Evers, Will McNeill
Author Affiliation: School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BF, UK
Format: Article
Language: English
Published: MDPI AG, 2023-10-01
Series: AI
ISSN: 2673-2688
DOI: 10.3390/ai4040043
Subjects: AI ethics; human-centred AI; environmental ethics; scoping review; AI ethics standards; anthropocentrism
Online Access: https://www.mdpi.com/2673-2688/4/4/43