In humans, we trust
Abstract Algorithms have greatly advanced and become integrated into our everyday lives. Although they support humans in daily functions, they often exhibit unwanted behaviors that perpetuate social stereotypes, discrimination, and other forms of bias. Regardless of their accuracy on task, many algorithms are not systematically scrutinized for unintended behaviors. This phenomenon can propagate and amplify existing societal issues or even create new ones. Many have called for human supervision (human oversight) of algorithmic processes. Oversight is often presented as a way of monitoring algorithmic behavior so that identified issues can be addressed, by initiating a fix or even correcting the final decision. Unfortunately, the scientific community lacks a common consensus on what human oversight entails. Most importantly, the requirements for a successful application of a human oversight process are only vaguely defined. To address this, we present a critical synthesis of five key articles from different domains that discuss requirements for human oversight. We use the concept of the Society-in-the-Loop (SITL) [1] as the baseline for understanding and mapping these requirements. In addition, we comment on the requirements and the overall multidisciplinary trend around the topic. We then present the concept of a Modular Oversight Methodology (MOM) following the SITL viewpoint, also considering the requirements identified in the selected literature. Finally, we present a set of suggestions and future work required for a successful application of a human oversight process in a SITL approach.
Main Authors: | Kyriakos Kyriakou, Jahna Otterbacher |
---|---|
Format: | Article |
Language: | English |
Published: | Springer, 2023-12-01 |
Series: | Discover Artificial Intelligence |
Subjects: | Human oversight; Algorithmic fairness; Algorithmic accountability; AI ethics; Society-in-the-loop |
Online Access: | https://doi.org/10.1007/s44163-023-00092-2 |
_version_ | 1797388207365554176 |
author | Kyriakos Kyriakou; Jahna Otterbacher
author_facet | Kyriakos Kyriakou; Jahna Otterbacher
author_sort | Kyriakos Kyriakou |
collection | DOAJ |
description | Abstract Algorithms have greatly advanced and become integrated into our everyday lives. Although they support humans in daily functions, they often exhibit unwanted behaviors that perpetuate social stereotypes, discrimination, and other forms of bias. Regardless of their accuracy on task, many algorithms are not systematically scrutinized for unintended behaviors. This phenomenon can propagate and amplify existing societal issues or even create new ones. Many have called for human supervision (human oversight) of algorithmic processes. Oversight is often presented as a way of monitoring algorithmic behavior so that identified issues can be addressed, by initiating a fix or even correcting the final decision. Unfortunately, the scientific community lacks a common consensus on what human oversight entails. Most importantly, the requirements for a successful application of a human oversight process are only vaguely defined. To address this, we present a critical synthesis of five key articles from different domains that discuss requirements for human oversight. We use the concept of the Society-in-the-Loop (SITL) [1] as the baseline for understanding and mapping these requirements. In addition, we comment on the requirements and the overall multidisciplinary trend around the topic. We then present the concept of a Modular Oversight Methodology (MOM) following the SITL viewpoint, also considering the requirements identified in the selected literature. Finally, we present a set of suggestions and future work required for a successful application of a human oversight process in a SITL approach. |
first_indexed | 2024-03-08T22:36:29Z |
format | Article |
id | doaj.art-7b7675b0fb93475ca84c9b1dbd172a64 |
institution | Directory Open Access Journal |
issn | 2731-0809 |
language | English |
last_indexed | 2024-03-08T22:36:29Z |
publishDate | 2023-12-01 |
publisher | Springer |
record_format | Article |
series | Discover Artificial Intelligence |
spelling | doaj.art-7b7675b0fb93475ca84c9b1dbd172a64 2023-12-17T12:24:25Z eng Springer Discover Artificial Intelligence 2731-0809 2023-12-01 3 1 1 18 10.1007/s44163-023-00092-2 In humans, we trust Kyriakos Kyriakou (Fairness and Ethics in AI-Human Interaction (fAIre MRG), CYENS Centre of Excellence) Jahna Otterbacher (Fairness and Ethics in AI-Human Interaction (fAIre MRG), CYENS Centre of Excellence) https://doi.org/10.1007/s44163-023-00092-2 Human oversight; Algorithmic fairness; Algorithmic accountability; AI ethics; Society-in-the-loop |
spellingShingle | Kyriakos Kyriakou Jahna Otterbacher In humans, we trust Discover Artificial Intelligence Human oversight Algorithmic fairness Algorithmic accountability AI ethics Society-in-the-loop |
title | In humans, we trust |
title_full | In humans, we trust |
title_fullStr | In humans, we trust |
title_full_unstemmed | In humans, we trust |
title_short | In humans, we trust |
title_sort | in humans we trust |
topic | Human oversight Algorithmic fairness Algorithmic accountability AI ethics Society-in-the-loop |
url | https://doi.org/10.1007/s44163-023-00092-2 |
work_keys_str_mv | AT kyriakoskyriakou inhumanswetrust AT jahnaotterbacher inhumanswetrust |