Measuring the interpretability of unsupervised representations via quantized reversed probing

Full description

Self-supervised visual representation learning has recently attracted significant research interest. While a common way to evaluate self-supervised representations is through transfer to various downstream tasks, we instead investigate the problem of measuring their interpretability, i.e. understanding the semantics encoded in raw representations. We formulate the latter as estimating the mutual information between the representation and a space of manually labelled concepts. To quantify this, we introduce a decoding bottleneck: information must be captured by simple predictors, mapping concepts to clusters in representation space. This approach, which we call reverse linear probing, provides a single number that is sensitive to the semanticity of the representation. The measure can also detect when the representation contains combinations of concepts (e.g., "red apple") instead of just individual attributes ("red" and "apple" independently). Finally, we propose to use supervised classifiers to automatically label large datasets in order to enrich the space of concepts used for probing. We use our method to evaluate a large number of self-supervised representations, rank them by interpretability, highlight the differences that emerge compared to the standard evaluation with linear probes, and discuss several qualitative insights. Code at: https://github.com/iro-cp/ssl-qrp.
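
The description above outlines the recipe at a high level: quantize the representation space into clusters, then ask how well a simple predictor can map labelled concepts onto those clusters, scored in mutual-information terms. The snippet below is a minimal illustrative sketch of that idea in Python with scikit-learn; the function name quantized_reverse_probe, the use of k-means and logistic regression, and all hyperparameters are assumptions made for illustration, not the authors' exact pipeline (the official code is in the repository linked above).

# A minimal sketch of the idea described above, NOT the authors' exact pipeline:
# quantize frozen features into clusters, fit a simple predictor that maps each
# concept label to a cluster, and score how well concepts explain the clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss


def quantized_reverse_probe(features, concepts, n_clusters=100, seed=0):
    """features: (N, D) array of frozen representations; concepts: (N,) integer labels."""
    # 1) Quantization bottleneck: discretize the representation space into clusters.
    clusters = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(features)

    # 2) "Reverse" probe: a simple predictor from one-hot concepts to cluster ids.
    onehot = np.eye(int(concepts.max()) + 1)[concepts]
    probe = LogisticRegression(max_iter=1000).fit(onehot, clusters)

    # 3) Scores: probe accuracy, plus a rough mutual-information estimate
    #    I(cluster; concept) ~ H(cluster) - H(cluster | concept), in nats.
    #    (Fitting and scoring on the same data is a simplification; held-out
    #    splits would normally be used.)
    accuracy = probe.score(onehot, clusters)
    counts = np.bincount(clusters)
    p = counts[counts > 0] / counts.sum()
    h_marginal = -np.sum(p * np.log(p))
    h_conditional = log_loss(clusters, probe.predict_proba(onehot))  # mean cross-entropy (nats)
    return {"accuracy": accuracy, "mi_nats_estimate": h_marginal - h_conditional}

This toy version only mirrors the high-level ingredients stated in the description (clustering as a decoding bottleneck, a simple concept-to-cluster predictor, an information-style score); the paper's actual clustering, predictor, and evaluation protocol may differ.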

Bibliographic Details
Main Authors: Laina, I, Asano, YM, Vedaldi, A
Format: Conference item
Language: English
Published: OpenReview, 2021
Institution: University of Oxford