Net2Vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks


Bibliographic Details
Main Authors: Fong, R, Vedaldi, A
Format: Conference item
Published: Institute of Electrical and Electronics Engineers 2018
author Fong, R
Vedaldi, A
collection OXFORD
description <p>In an effort to understand the meaning of the intermediate representations captured by deep networks, recent papers have tried to associate specific semantic concepts with individual neural network filter responses, where interesting correlations are often found, largely by focusing on extremal filter responses. In this paper, we show that this approach can favor easy-to-interpret cases that are not necessarily representative of the average behavior of a representation.</p> <p>A more realistic but harder-to-study hypothesis is that semantic representations are distributed, and thus filters must be studied in conjunction. To investigate this idea while enabling systematic visualization and quantification of multiple filter responses, we introduce the Net2Vec framework, in which semantic concepts are mapped to vectorial embeddings based on the corresponding filter responses. By studying such embeddings, we show that (1) in most cases, multiple filters are required to code for a concept; (2) filters are often not concept-specific and help encode multiple concepts; and (3) compared to single filter activations, filter embeddings better characterize the meaning of a representation and its relationship to other concepts.</p>
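The core idea in the description, representing a concept as a learned weight vector over all filters rather than by a single best-matching filter, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: it assumes per-image pooled filter activations and fits the concept's weight vector `w` (the "embedding") with plain logistic regression; the shapes, the synthetic data, and the helper name `learn_concept_embedding` are all assumptions made for the example.

```python
import numpy as np

def learn_concept_embedding(acts, labels, lr=0.1, epochs=500):
    """Fit a weight vector over filters that predicts concept presence.

    acts: (N, K) array of pooled filter activations for N images, K filters.
    labels: (N,) array of 0/1 concept-presence labels.
    Returns (w, b): w is the (K,) concept embedding over filters, b a bias.
    """
    n, k = acts.shape
    w = np.zeros(k)
    b = 0.0
    for _ in range(epochs):
        # Sigmoid of the weighted combination of filter responses.
        p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))
        # Gradient of the mean logistic loss.
        w -= lr * (acts.T @ (p - labels)) / n
        b -= lr * np.mean(p - labels)
    return w, b

# Synthetic demo: the "concept" depends on filters 2 and 5 jointly, so an
# analysis that looks at one filter at a time would miss half the signal.
rng = np.random.default_rng(0)
acts = rng.random((400, 8))
labels = ((acts[:, 2] + acts[:, 5]) > 1.0).astype(float)
w, b = learn_concept_embedding(acts, labels)
top2 = set(np.argsort(-np.abs(w))[:2].tolist())
print(top2)  # the two largest-magnitude weights fall on filters 2 and 5
```

Inspecting the learned `w` is what makes the distributed hypothesis quantifiable: if concepts were encoded by single filters, each embedding would be dominated by one large weight, whereas spread-out weights indicate that multiple filters jointly code for the concept.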
format Conference item
id oxford-uuid:ab0f9aaf-103b-46e1-815a-99969e25d48b
institution University of Oxford
publishDate 2018
publisher Institute of Electrical and Electronics Engineers
title Net2Vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks