Principal differences analysis: Interpretable characterization of differences between distributions
We introduce principal differences analysis (PDA) for analyzing differences between high-dimensional distributions. The method operates by finding the projection that maximizes the Wasserstein divergence between the resulting univariate populations. Relying on the Cramér-Wold device, it requires no assumptions about the form of the underlying distributions, nor the nature of their inter-class differences. A sparse variant of the method is introduced to identify features responsible for the differences. We provide algorithms for both the original minimax formulation and its semidefinite relaxation. In addition to deriving some convergence results, we illustrate how the approach may be applied to identify differences between cell populations in the somatosensory cortex and hippocampus as manifested by single cell RNA-seq. Our broader framework extends beyond the specific choice of Wasserstein divergence.
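In symbols, PDA's core objective is to find the most discriminative linear projection. A minimal statement of that objective, assuming the standard unit-norm constraint (the paper's full minimax formulation and the sparse variant's additional constraint on $w$ are not reproduced here):

$$\hat{w} = \arg\max_{\|w\|_2 = 1} \; \mathcal{W}\!\left(w^\top X,\; w^\top Y\right),$$

where $\mathcal{W}$ denotes the Wasserstein divergence between the two projected univariate samples. The sketch below illustrates this objective numerically. It is not the authors' algorithm: it swaps the minimax and semidefinite-programming solvers mentioned in the abstract for naive random search over the unit sphere, and `pda_direction` and its parameters are hypothetical names introduced for illustration.

```python
# Illustrative sketch only, not the paper's implementation: approximate the
# PDA direction by random search over unit vectors, scoring each candidate
# by the 1-D Wasserstein distance between the projected samples.
import numpy as np
from scipy.stats import wasserstein_distance

def pda_direction(X, Y, n_candidates=2000, seed=0):
    """Return the unit vector w maximizing W(w.X, w.Y) over random candidates.

    X, Y: (n_samples, d) arrays of observations from the two populations.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    best_w, best_div = None, -np.inf
    for _ in range(n_candidates):
        w = rng.normal(size=d)
        w /= np.linalg.norm(w)                     # restrict to the unit sphere
        div = wasserstein_distance(X @ w, Y @ w)   # 1-D Wasserstein divergence
        if div > best_div:
            best_w, best_div = w, div
    return best_w, best_div

# Toy check: two Gaussians that differ only along the first coordinate,
# so the recovered direction should concentrate its weight there.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
Y = rng.normal(size=(500, 5))
Y[:, 0] += 2.0
w, div = pda_direction(X, Y)
print(np.round(w, 2), round(div, 2))
```

Random search keeps the sketch short; to recover the feature-level interpretability described above, one would add a sparsity constraint on $w$, which the paper handles via a semidefinite relaxation.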
Main Authors: | Mueller, Jonas Weylin; Jaakkola, Tommi S |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Format: | Article (Conference Paper) |
Language: | en_US |
Published: | Neural Information Processing Systems Foundation, Inc., 2018 |
Citation: | Mueller, Jonas, and Tommi Jaakkola. "Principal Differences Analysis: Interpretable Characterization of Differences between Distributions." Advances in Neural Information Processing Systems 28 (NIPS 2015), 7-12 December 2015, Montreal, Canada. NIPS, 2015. |
Funding: | National Institutes of Health (U.S.) (Grant T32HG004947) |
Online Access: | http://hdl.handle.net/1721.1/115931 https://papers.nips.cc/paper/5894-principal-differences-analysis-interpretable-characterization-of-differences-between-distributions |
ORCID: | https://orcid.org/0000-0002-7164-903X https://orcid.org/0000-0002-2199-0379 |
Rights: | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. |