How AI developers can assure algorithmic fairness

Abstract: Artificial intelligence (AI) has rapidly become one of the technologies used for competitive advantage. However, there are also growing concerns about bias in AI models as AI developers risk introducing bias both unintentionally and intentionally. This study, using a qualitative approach, investigated how AI developers can contribute to the development of fair AI models. The key findings reveal that the risk of bias is mainly because of the lack of gender and social diversity in AI development teams, and haste from AI managers to deliver much-anticipated results. The integrity of AI developers is also critical as they may conceal bias from management and other AI stakeholders. The testing phase before model deployment risks bias because it is rarely representative of the diverse societal groups that may be affected. The study makes practical recommendations in four main areas: governance, social, technical, and training and development processes. Responsible organisations need to take deliberate actions to ensure that their AI developers adhere to fair processes when developing AI; AI developers must prioritise ethical considerations and consider the impact their models may have on society; partnerships between AI developers, AI stakeholders, and society that might be impacted by AI models should be established; and AI developers need to prioritise transparency and explainability in their models while ensuring adequate testing for bias and corrective measures before deployment. Emotional intelligence training should also be provided to the AI developers to help them engage in productive conversations with individuals outside the development team.

Bibliographic Details
Main Authors: Khensani Xivuri, Hosanna Twinomurinzi
Format: Article
Language: English
Published: Springer, 2023-07-01
Series: Discover Artificial Intelligence
ISSN: 2731-0809
Author Affiliation: Centre for Applied Data Science, University of Johannesburg (both authors)
Subjects: Artificial intelligence (AI); Fairness; Algorithms; Jurgen Habermas; Domination-free development environment; Process model
Online Access: https://doi.org/10.1007/s44163-023-00074-4