Natural Language Descriptions of Deep Visual Features

Some neurons in deep networks specialize in recognizing highly specific perceptual, structural, or semantic features of inputs. In computer vision, techniques exist for identifying neurons that respond to individual concept categories like colors, textures, and object classes. But these techniques are limited in scope, labeling only a small subset of neurons and behaviors in any network. Is a richer characterization of neuron-level computation possible? We introduce a procedure (called MILAN, for mutual-information guided linguistic annotation of neurons) that automatically labels neurons with open-ended, compositional, natural language descriptions. Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active. MILAN produces fine-grained descriptions that capture categorical, relational, and logical structure in learned features. These descriptions obtain high agreement with human-generated feature descriptions across a diverse set of model architectures and tasks, and can aid in understanding and controlling learned models. We highlight three applications of natural language neuron descriptions. First, we use MILAN for analysis, characterizing the distribution and importance of neurons selective for attribute, category, and relational information in vision models. Second, we use MILAN for auditing, surfacing neurons sensitive to protected categories like race and gender in models trained on datasets intended to obscure these features. Finally, we use MILAN for editing, improving robustness in an image classifier by deleting neurons sensitive to text features spuriously correlated with class labels.

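As a rough illustration of the search objective summarized in the abstract, the sketch below scores each candidate description d against a neuron's top-activating image regions E by pointwise mutual information, PMI(d, E) = log p(d | E) - log p(d), and keeps the highest-scoring candidate. The function names, scoring callables, and candidate set are hypothetical placeholders for illustration only, not the procedure implemented in the thesis.

# Hypothetical sketch of a PMI-guided description search (not the authors' code).
# Assumes two scoring functions are supplied by the caller:
#   log_p_desc_given_regions(d, regions) -- log p(d | E), e.g. from a captioning model
#   log_p_desc(d)                        -- log p(d), e.g. from a language-model prior

def pmi_score(description, exemplar_regions, log_p_desc_given_regions, log_p_desc):
    """Pointwise mutual information between a description and the image
    regions in which a neuron is active: log p(d | E) - log p(d)."""
    return (log_p_desc_given_regions(description, exemplar_regions)
            - log_p_desc(description))

def describe_neuron(exemplar_regions, candidate_descriptions,
                    log_p_desc_given_regions, log_p_desc):
    """Label a neuron with the candidate description that maximizes PMI
    with its top-activating image regions."""
    return max(
        candidate_descriptions,
        key=lambda d: pmi_score(d, exemplar_regions,
                                log_p_desc_given_regions, log_p_desc),
    )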

Bibliographic Details
Main Author: Hernandez, Evan
Other Authors: Andreas, Jacob
Format: Thesis
Published: Massachusetts Institute of Technology, 2022
Online Access: https://hdl.handle.net/1721.1/143251
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Degree: S.M.
Date Issued: 2022-02
ORCID: 0000-0002-8876-1781
Rights: In Copyright - Educational Use Permitted; Copyright MIT; http://rightsstatements.org/page/InC-EDU/1.0/
File Format: application/pdf