Pareto-optimal data compression for binary classification tasks

The goal of lossy data compression is to reduce the storage cost of a data set X while retaining as much information as possible about something (Y) that you care about. For example, what aspects of an image X contain the most information about whether it depicts a cat? Mathematically, this corresponds to finding a mapping X→Z≡f(X) that maximizes the mutual information I(Z,Y) while the entropy H(Z) is kept below some fixed threshold. We present a new method for mapping out the Pareto frontier for classification tasks, reflecting the tradeoff between retained entropy and class information. We first show how a random variable X (an image, say) drawn from a class Y∈{1,…,n} can be distilled into a vector W=f(X)∈R^(n−1) losslessly, so that I(W,Y)=I(X,Y); for example, for a binary classification task of cats and dogs, each image X is mapped into a single real number W retaining all information that helps distinguish cats from dogs. For the n=2 case of binary classification, we then show how W can be further compressed into a discrete variable Z=g_β(W)∈{1,…,m_β} by binning W into m_β bins, in such a way that varying the parameter β sweeps out the full Pareto frontier, solving a generalization of the discrete information bottleneck (DIB) problem. We argue that the most interesting points on this frontier are "corners" maximizing I(Z,Y) for a fixed number of bins m=2,3,…, which can conveniently be found without multiobjective optimization. We apply this method to the CIFAR-10, MNIST, and Fashion-MNIST datasets, illustrating how it can be interpreted as an information-theoretically optimal image clustering algorithm. We find that these Pareto frontiers are not concave, and that recently reported DIB phase transitions correspond to transitions between these corners, changing the number of clusters.

Keywords: information; bottleneck; compression; classification
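To make the compression step concrete, below is a minimal, illustrative Python sketch (not the authors' implementation). It assumes the distilled scalar W is already available, for example as a classifier's estimated P(Y=1|X), and it uses simple quantile binning as a stand-in for the paper's optimized bin boundaries; sweeping the number of bins m then yields candidate "corner" points (H(Z), I(Z,Y)) of the entropy/information tradeoff.

    import numpy as np

    def mutual_information(z, y):
        # Empirical mutual information I(Z;Y) in bits from paired samples.
        zs, z_idx = np.unique(z, return_inverse=True)
        ys, y_idx = np.unique(y, return_inverse=True)
        joint = np.zeros((len(zs), len(ys)))
        np.add.at(joint, (z_idx, y_idx), 1.0)   # joint count table
        joint /= joint.sum()                    # joint probability P(Z,Y)
        pz = joint.sum(axis=1, keepdims=True)   # marginal P(Z)
        py = joint.sum(axis=0, keepdims=True)   # marginal P(Y)
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / (pz @ py)[nz])).sum())

    def entropy(z):
        # Empirical Shannon entropy H(Z) in bits.
        _, counts = np.unique(z, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def bin_compress(w, m):
        # Compress scalar W into Z in {0,...,m-1}. Quantile bin edges are an
        # assumption made here for illustration; the paper instead optimizes
        # the bin boundaries to maximize I(Z,Y) at each m.
        edges = np.quantile(w, np.linspace(0, 1, m + 1)[1:-1])
        return np.digitize(w, edges)

    # Toy binary task: W stands in for the losslessly distilled variable,
    # e.g. a classifier's estimate of P(Y=1|X) (a simulated score here).
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 10_000)
    w = np.clip(rng.normal(0.3 + 0.4 * y, 0.2), 0.0, 1.0)

    # Each bin count m gives one candidate "corner" point (H(Z), I(Z,Y)).
    for m in range(2, 7):
        z = bin_compress(w, m)
        print(f"m={m}: H(Z)={entropy(z):.3f} bits, I(Z,Y)={mutual_information(z, y):.3f} bits")

On this toy data, I(Z,Y) should climb toward I(W,Y) (at most H(Y) ≤ 1 bit) as m grows, while H(Z) keeps increasing, so each additional bin buys less class information; that diminishing return is the qualitative shape behind the frontier's corners.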

Bibliographic Details
Main Authors: Tegmark, Max Erik; Wu, Tailin
Other Authors: Massachusetts Institute of Technology. Department of Physics; MIT Kavli Institute for Astrophysics and Space Research
Format: Article
Published: Multidisciplinary Digital Publishing Institute, 2020
Online Access: https://hdl.handle.net/1721.1/125546
Citation: Tegmark, Max, and Tailin Wu. "Pareto-optimal data compression for binary classification tasks." Entropy 22, no. 1 (December 2019): 7.
DOI: 10.3390/e22010007
Journal: Entropy (ISSN 1099-4300)
Rights: ©2019 Author(s); Creative Commons Attribution (https://creativecommons.org/licenses/by/4.0/)
Funding: TWCF (grant no. 0322)
Repository ID: mit-1721.1/125546