Methodology and tools for designing ethical artificial intelligence systems


Bibliographic details
Main author: Zhang, Jiehuang
Other authors: Yu Han
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Computer science and engineering
Online access: https://hdl.handle.net/10356/169362
Description: As artificial intelligence (AI) systems become increasingly ubiquitous, there is a need to steer their design and development along an ethical trajectory. The technological intricacies of advanced AI systems (e.g., self-driving cars) warrant a deeper look into ethical algorithmic decision making. However, existing AI software design and development teams generally lack understanding of the concepts involved in ethical AI, and lack easy-to-use tools to help them incorporate ethical considerations into the AI systems being developed. This thesis aims to address this important gap. Firstly, even though AI systems produce logical decisions, biases and discrimination can creep into the data and models, affecting outcomes and causing harm. This inspired us to re-evaluate the design metrics for creating such systems and to focus more on integrating human values into them. However, while awareness of the need for ethical AI systems is high, there are currently few methodologies for designers and engineers to incorporate human values into their designs. The proposed methodological tool addresses this gap by helping product teams surface fairness concerns, navigate complex ethical choices around fairness, and overcome blind spots and team biases. It also helps them stimulate perspective-taking among multiple parties and stakeholders. With our tool, we aim to lower the bar for bringing fairness into the design discussion so that more design teams can make better-informed decisions about fairness in their application scenarios. We then extended the methodology to the field of explainable AI (XAI). The development of AI systems has created many applications of tremendous current and future value to human society. However, as AI systems penetrate more aspects of everyday life, there is a pressing need to explain their decision-making processes in order to build trust and familiarity with end users.
In selected fields such as healthcare and self-driving cars, the stakes are high enough to require AI to achieve a minimum standard of accuracy and to provide well-designed explanations for its outputs, especially when they affect human life. To date, many techniques have been developed to make algorithms more explainable in human terms; however, there are no design methodologies that allow software teams to systematically surface and address explainability-related issues during the AI design and conception stage. We propose the Explainability in Design (EID) methodological framework for addressing explainability problems in AI systems. EID is a step-by-step guide to the AI design process that has been refined over a series of user studies and interviews with experts in AI explainability. It is designed to be used by software design teams to uncover and resolve potential issues in their AI products, as well as to refine and explore the explainability of their products and systems. Through empirical studies involving AI system designers, it has been shown to lower the barrier to entry and reduce the time and experience needed to make well-informed decisions for integrating explainability into AI solutions.
School: School of Computer Science and Engineering
Research institute: Alibaba-NTU Joint Research Institute
Contact: han.yu@ntu.edu.sg
Subject: Engineering::Computer science and engineering
Degree: Doctor of Philosophy
Citation: Zhang, J. (2023). Methodology and tools for designing ethical artificial intelligence systems. Doctoral thesis, Nanyang Technological University, Singapore.
DOI: 10.32657/10356/169362
License: Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
File format: application/pdf