TeLLMe what you see: using LLMs to explain neurons in vision models

Full description

As machine learning models expand into ever more fields, the demand for model interpretability grows. This is particularly crucial for deep learning models, which, due to their highly nonlinear nature, are often referred to as black boxes. This paper proposes a novel method for generating and evaluating concise explanations of the behavior of specific neurons in trained vision models, an important step towards understanding the decision-making in neural networks. Our technique draws inspiration from a recently published framework that used GPT-4 to interpret neurons in language models. Here, we extend the method to vision models, offering interpretations based on both neuron activations and weights in the network. We illustrate our approach on an AlexNet model and a ViT trained on ImageNet, generating clear, human-readable explanations. Our method outperforms the current state of the art in both quantitative and qualitative assessments, while also demonstrating a superior capacity to capture polysemic neuron behavior. The findings hold promise for enhancing transparency, trust, and understanding in the deployment of deep learning vision models across various domains.
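The record does not include the implementation, but the activation-based half of the approach described above follows a recognizable recipe: collect the inputs that most strongly activate a given neuron, then ask an LLM to summarize what they have in common. Below is a minimal sketch of that recipe, assuming torchvision's pretrained AlexNet; the layer and channel choices, the "probe_images/" folder, and the final LLM call are all hypothetical stand-ins, not the thesis's exact method.

import torch
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# Pretrained AlexNet from torchvision; eval mode disables dropout.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

# Record the peak spatial activation of one conv channel ("neuron")
# for every probing image. Layer and channel are arbitrary examples.
layer, channel = model.features[10], 42
acts = []
layer.register_forward_hook(
    lambda mod, inp, out: acts.append(out[:, channel].amax(dim=(1, 2)))
)

# Standard ImageNet preprocessing; "probe_images/" is a hypothetical
# folder of images organized into class subdirectories.
tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
data = ImageFolder("probe_images/", transform=tfm)
loader = DataLoader(data, batch_size=64)

with torch.no_grad():
    for x, _ in loader:
        model(x)

# Describe the top-k activating images to an LLM (here via their class
# labels; crops or captions of the images themselves would also work).
top = torch.cat(acts).topk(k=9).indices.tolist()
top_labels = [data.classes[data.samples[i][1]] for i in top]
prompt = (
    "These inputs most strongly activate a single neuron in a vision "
    f"model: {top_labels}. In one short sentence, what concept does "
    "this neuron appear to detect?"
)
print(prompt)  # send to an LLM of your choice to obtain the explanation

The weight-based interpretations the abstract also mentions would instead characterize a neuron through the parameters connecting it to other units; reproducing that part faithfully would require the thesis itself.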

Bibliographic Details
Main Author: Guertler, Leon
Other Authors: Luu Anh Tuan
School: School of Computer Science and Engineering
Format: Final Year Project (FYP)
Degree: Bachelor's degree
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; Explainable AI; LLM; Vision network
Online Access: https://hdl.handle.net/10356/174298
Citation: Guertler, L. (2024). TeLLMe what you see: using LLMs to explain neurons in vision models. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/174298