A novel application of XAI in squinting models: A position paper
Artificial Intelligence, and Machine Learning in particular, are becoming increasingly foundational to our collective future. Recent developments around generative models such as ChatGPT and DALL-E represent just the tip of the iceberg in new technologies that will change the way we live our lives. Convolu...
Main Authors: | Kenneth Wenger; Katayoun Hossein Abadi; Damian Fozard; Kayvan Tirdad; Alex Dela Cruz; Alireza Sadeghian |
---|---|
Format: | Article |
Language: | English |
Published: | Elsevier, 2023-09-01 |
Series: | Machine Learning with Applications |
Subjects: | Artificial Intelligence; Deep learning; Pathology; Explainable AI; XAI; Safety critical AI |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2666827023000440 |
_version_ | 1827860197605900288
author | Kenneth Wenger; Katayoun Hossein Abadi; Damian Fozard; Kayvan Tirdad; Alex Dela Cruz; Alireza Sadeghian
author_facet | Kenneth Wenger; Katayoun Hossein Abadi; Damian Fozard; Kayvan Tirdad; Alex Dela Cruz; Alireza Sadeghian
author_sort | Kenneth Wenger |
collection | DOAJ |
description | Artificial Intelligence, and Machine Learning in particular, are becoming increasingly foundational to our collective future. Recent developments around generative models such as ChatGPT and DALL-E represent just the tip of the iceberg in new technologies that will change the way we live our lives. Convolutional Neural Networks (CNNs) and Transformer models are at the heart of advancements in the autonomous vehicle and health care industries as well. Yet these models, as impressive as they are, still make plenty of mistakes without justifying or explaining which aspects of the input or internal state were responsible for the error. Often, the goal of automation is to increase throughput, processing as many tasks as possible in as short a time as possible. For some use cases the cost of mistakes might be acceptable as long as production is increased above some set margin. However, in health care, autonomous vehicles, and financial applications, a mistake might have catastrophic consequences. For this reason, industries where single mistakes can be costly are less enthusiastic about early AI adoption. The field of eXplainable AI (XAI) has attracted significant attention in recent years with the goal of producing algorithms that shed light on the decision-making process of neural networks. In this paper we show how robust vision pipelines can be built using XAI algorithms, with the goal of producing automated watchdogs that actively monitor the decision-making process of neural networks for signs of mistakes or ambiguous data. We call these robust vision pipelines squinting pipelines.
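The abstract only outlines the watchdog idea; the paper's actual method is not reproduced in this record. As a rough illustration, a minimal sketch of such a "squinting" check, assuming PyTorch and a simple input-gradient saliency heuristic (the function names and the entropy threshold below are hypothetical, not from the paper), might look like:

```python
# Minimal sketch of a "squinting" watchdog: accept a prediction only
# when the model's saliency is focused, flag it otherwise.
# Assumes PyTorch, a classifier in eval mode, and inputs shaped (N, C, H, W).
import torch

def saliency_entropy(model, x):
    """Return predicted classes and the normalized entropy of the
    input-gradient saliency map (0 = focused, 1 = maximally diffuse)."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)                              # (N, num_classes)
    pred = logits.argmax(dim=1)
    # Backpropagate the winning logit to get per-pixel sensitivity.
    logits.gather(1, pred.unsqueeze(1)).sum().backward()
    sal = x.grad.abs().sum(dim=1).flatten(1)       # (N, H*W)
    p = sal / sal.sum(dim=1, keepdim=True)         # saliency as a distribution
    ent = -(p * (p + 1e-12).log()).sum(dim=1)
    return pred, ent / torch.log(torch.tensor(float(p.shape[1])))

def squinting_predict(model, x, entropy_threshold=0.95):
    """Accept predictions whose saliency is focused; flag the rest
    as ambiguous for human review."""
    pred, ent = saliency_entropy(model, x)
    return pred, ent > entropy_threshold           # (predictions, flagged mask)
```

One way to read the design: the pipeline "squints" at the model's own evidence and withholds trust when the attribution is diffuse; Grad-CAM, integrated gradients, or any other XAI attribution could stand in for the raw input gradient used here.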
first_indexed | 2024-03-12T13:19:48Z |
format | Article |
id | doaj.art-d57f35a2bc554ee483e7b39c67cdbdbe |
institution | Directory Open Access Journal |
issn | 2666-8270 |
language | English |
last_indexed | 2024-03-12T13:19:48Z |
publishDate | 2023-09-01 |
publisher | Elsevier |
record_format | Article |
series | Machine Learning with Applications |
spelling | doaj.art-d57f35a2bc554ee483e7b39c67cdbdbe | 2023-08-26T04:44:19Z | eng | Elsevier | Machine Learning with Applications | ISSN 2666-8270 | 2023-09-01 | Vol. 13, Article 100491 | A novel application of XAI in squinting models: A position paper
Authors: Kenneth Wenger (Research Department, Advanced Artificial Intelligence & Cognition, Squint Inc, Waterloo, Ontario, Canada; Department of Computer Science, Faculty of Science, Toronto Metropolitan University, Toronto, Ontario, Canada; corresponding author at: Department of Computer Science, Faculty of Science, Toronto Metropolitan University, Toronto, Ontario, Canada); Katayoun Hossein Abadi (Research Department, Advanced Artificial Intelligence & Cognition, Squint Inc, Waterloo, Ontario, Canada); Damian Fozard (Research Department, Advanced Artificial Intelligence & Cognition, Squint Inc, Waterloo, Ontario, Canada); Kayvan Tirdad (Department of Computer Science, Faculty of Science, Toronto Metropolitan University, Toronto, Ontario, Canada); Alex Dela Cruz (Department of Computer Science, Faculty of Science, Toronto Metropolitan University, Toronto, Ontario, Canada); Alireza Sadeghian (Department of Computer Science, Faculty of Science, Toronto Metropolitan University, Toronto, Ontario, Canada)
Abstract: Artificial Intelligence, and Machine Learning in particular, are becoming increasingly foundational to our collective future. Recent developments around generative models such as ChatGPT and DALL-E represent just the tip of the iceberg in new technologies that will change the way we live our lives. Convolutional Neural Networks (CNNs) and Transformer models are at the heart of advancements in the autonomous vehicle and health care industries as well. Yet these models, as impressive as they are, still make plenty of mistakes without justifying or explaining which aspects of the input or internal state were responsible for the error. Often, the goal of automation is to increase throughput, processing as many tasks as possible in as short a time as possible. For some use cases the cost of mistakes might be acceptable as long as production is increased above some set margin. However, in health care, autonomous vehicles, and financial applications, a mistake might have catastrophic consequences. For this reason, industries where single mistakes can be costly are less enthusiastic about early AI adoption. The field of eXplainable AI (XAI) has attracted significant attention in recent years with the goal of producing algorithms that shed light on the decision-making process of neural networks. In this paper we show how robust vision pipelines can be built using XAI algorithms, with the goal of producing automated watchdogs that actively monitor the decision-making process of neural networks for signs of mistakes or ambiguous data. We call these robust vision pipelines squinting pipelines.
Online Access: http://www.sciencedirect.com/science/article/pii/S2666827023000440
Subjects: Artificial Intelligence; Deep learning; Pathology; Explainable AI; XAI; Safety critical AI
spellingShingle | Kenneth Wenger; Katayoun Hossein Abadi; Damian Fozard; Kayvan Tirdad; Alex Dela Cruz; Alireza Sadeghian; A novel application of XAI in squinting models: A position paper; Machine Learning with Applications; Artificial Intelligence; Deep learning; Pathology; Explainable AI; XAI; Safety critical AI
title | A novel application of XAI in squinting models: A position paper |
title_full | A novel application of XAI in squinting models: A position paper |
title_fullStr | A novel application of XAI in squinting models: A position paper |
title_full_unstemmed | A novel application of XAI in squinting models: A position paper |
title_short | A novel application of XAI in squinting models: A position paper |
title_sort | novel application of xai in squinting models a position paper |
topic | Artificial Intelligence; Deep learning; Pathology; Explainable AI; XAI; Safety critical AI
url | http://www.sciencedirect.com/science/article/pii/S2666827023000440 |
work_keys_str_mv | AT kennethwenger anovelapplicationofxaiinsquintingmodelsapositionpaper
AT katayounhosseinabadi anovelapplicationofxaiinsquintingmodelsapositionpaper
AT damianfozard anovelapplicationofxaiinsquintingmodelsapositionpaper
AT kayvantirdad anovelapplicationofxaiinsquintingmodelsapositionpaper
AT alexdelacruz anovelapplicationofxaiinsquintingmodelsapositionpaper
AT alirezasadeghian anovelapplicationofxaiinsquintingmodelsapositionpaper
AT kennethwenger novelapplicationofxaiinsquintingmodelsapositionpaper
AT katayounhosseinabadi novelapplicationofxaiinsquintingmodelsapositionpaper
AT damianfozard novelapplicationofxaiinsquintingmodelsapositionpaper
AT kayvantirdad novelapplicationofxaiinsquintingmodelsapositionpaper
AT alexdelacruz novelapplicationofxaiinsquintingmodelsapositionpaper
AT alirezasadeghian novelapplicationofxaiinsquintingmodelsapositionpaper