Learning embeddings into entropic Wasserstein spaces

Bibliographic Details
Main Authors: Frogner, C, Solomon, J, Mirzazadeh, F
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; MIT-IBM Watson AI Lab
Format: Article (Conference Paper)
Language: English
Published: 2021 (paper presented at ICLR, May 2019)
Online Access: https://hdl.handle.net/1721.1/137728
Description:
© 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. Euclidean embeddings of data are fundamentally limited in their ability to capture latent semantic structures, which need not conform to Euclidean spatial assumptions. Here we consider an alternative, which embeds data as discrete probability distributions in a Wasserstein space, endowed with an optimal transport metric. Wasserstein spaces are much larger and more flexible than Euclidean spaces, in that they can successfully embed a wider variety of metric structures. We exploit this flexibility by learning an embedding that captures semantic information in the Wasserstein distance between embedded distributions. We examine empirically the representational capacity of our learned Wasserstein embeddings, showing that they can embed a wide variety of metric structures with smaller distortion than an equivalent Euclidean embedding. We also investigate an application to word embedding, demonstrating a unique advantage of Wasserstein embeddings: We can visualize the high-dimensional embedding directly, since it is a probability distribution on a low-dimensional space. This obviates the need for dimensionality reduction techniques like t-SNE for visualization.
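To make the abstract concrete: the distance the paper embeds into is the entropy-regularized Wasserstein distance, which is computable by the Sinkhorn fixed-point iteration. The sketch below is a minimal illustration of that computation between two discrete distributions, not the authors' implementation; the function name sinkhorn_distance and the parameter choices (eps, n_iters, uniform weights, squared Euclidean ground cost) are illustrative assumptions.

```python
# Minimal sketch of the entropic Wasserstein (Sinkhorn) distance that
# underlies the embedding objective: each embedded object is a discrete
# distribution (a small cloud of support points in a low-dimensional
# ground space), and semantic distance between two objects is the
# regularized optimal-transport cost between their clouds.
import numpy as np

def sinkhorn_distance(x, y, eps=0.1, n_iters=200):
    """Entropic Wasserstein distance between two uniform point clouds.

    x: (n, d) support points of the first distribution
    y: (m, d) support points of the second distribution
    eps: entropic regularization strength (illustrative choice)
    """
    n, m = x.shape[0], y.shape[0]
    a = np.full(n, 1.0 / n)            # uniform weights on each cloud
    b = np.full(m, 1.0 / m)
    # Squared Euclidean ground cost between support points
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)               # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):           # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]    # entropic optimal transport plan
    return (P * C).sum()               # transport cost under the plan

# Example: two 5-point clouds with a 2-D ground space. Because the
# ground space is low-dimensional, each embedding can be plotted
# directly as a point cloud -- the visualization advantage the
# abstract describes.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))
y = rng.normal(loc=1.0, size=(5, 2))
print(sinkhorn_distance(x, y))
```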
Citation: Frogner, C, Solomon, J and Mirzazadeh, F. 2019. "Learning embeddings into entropic Wasserstein spaces." 7th International Conference on Learning Representations, ICLR 2019.
Conference Listing: https://openreview.net/group?id=ICLR.cc/2019/Conference#accepted-poster-papers
Rights: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Source: arXiv