Quantifying learnability and describability of visual concepts emerging in representation learning
The increasing impact of black box models, and particularly of unsupervised ones, comes with an increasing interest in tools to understand and interpret them. In this paper, we consider in particular how to characterise visual groupings discovered automatically by deep neural networks, starting with state-of-the-art clustering methods. In some cases, clusters readily correspond to an existing labelled dataset. However, often they do not, yet they still maintain an "intuitive interpretability". We introduce two concepts, visual learnability and describability, that can be used to quantify the interpretability of arbitrary image groupings, including unsupervised ones. The idea is to measure (1) how well humans can learn to reproduce a grouping by measuring their ability to generalise from a small set of visual examples (learnability) and (2) whether the set of visual examples can be replaced by a succinct, textual description (describability). By assessing human annotators as classifiers, we remove the subjective quality of existing evaluation metrics. For better scalability, we finally propose a class-level captioning system to generate descriptions for visual groupings automatically and compare it to human annotators using the describability metric.
Main Authors: | Laina, I; Fong, RC; Vedaldi, A |
---|---|
Format: | Conference item |
Language: | English |
Published: | NeurIPS, 2020 |
author | Laina, I; Fong, RC; Vedaldi, A |
collection | OXFORD |
description | The increasing impact of black box models, and particularly of unsupervised ones, comes with an increasing interest in tools to understand and interpret them. In this paper, we consider in particular how to characterise visual groupings discovered automatically by deep neural networks, starting with state-of-the-art clustering methods. In some cases, clusters readily correspond to an existing labelled dataset. However, often they do not, yet they still maintain an "intuitive interpretability". We introduce two concepts, visual learnability and describability, that can be used to quantify the interpretability of arbitrary image groupings, including unsupervised ones. The idea is to measure (1) how well humans can learn to reproduce a grouping by measuring their ability to generalise from a small set of visual examples (learnability) and (2) whether the set of visual examples can be replaced by a succinct, textual description (describability). By assessing human annotators as classifiers, we remove the subjective quality of existing evaluation metrics. For better scalability, we finally propose a class-level captioning system to generate descriptions for visual groupings automatically and compare it to human annotators using the describability metric. |
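The abstract's core idea, treating human annotators as classifiers, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and the simple accuracy formulation are illustrative assumptions: learnability is scored as the fraction of held-out images an annotator assigns to the correct cluster after seeing a few examples per cluster.

```python
from collections import defaultdict

def learnability(true_labels, annotator_labels):
    """Overall learnability: fraction of held-out items the annotator
    assigns to the same cluster as the grouping under evaluation."""
    assert len(true_labels) == len(annotator_labels)
    correct = sum(t == a for t, a in zip(true_labels, annotator_labels))
    return correct / len(true_labels)

def per_cluster_learnability(true_labels, annotator_labels):
    """Per-cluster accuracy, e.g. to surface clusters humans find
    hard to learn from visual examples alone."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, a in zip(true_labels, annotator_labels):
        totals[t] += 1
        hits[t] += (t == a)
    return {c: hits[c] / totals[c] for c in totals}
```

The same scoring applies to describability by swapping the stimulus: the annotator sees a textual description of each cluster instead of example images, and accuracy is computed identically.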
format | Conference item |
id | oxford-uuid:bd8630a9-a4ff-4e21-9d02-7cbe47455042 |
institution | University of Oxford |
language | English |
publishDate | 2020 |
publisher | NeurIPS |
title | Quantifying learnability and describability of visual concepts emerging in representation learning |