Support and invertibility in domain-invariant representations
Learning domain-invariant representations has become a popular approach to unsupervised domain adaptation and is often justified by invoking a particular suite of theoretical results. We argue that there are two significant flaws in such arguments. First, the results in question hold only for a fixed representation and do not account for information lost in non-invertible transformations. Second, domain invariance is often a far too strict requirement and does not always lead to consistent estimation, even under strong and favorable assumptions. In this work, we give generalization bounds for unsupervised domain adaptation that hold for any representation function by acknowledging the cost of non-invertibility. In addition, we show that penalizing distance between densities is often wasteful and propose a bound based on measuring the extent to which the support of the source domain covers the target domain. We perform experiments on well-known benchmarks that illustrate the shortcomings of current standard practice.
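The abstract contrasts two ways of comparing source and target domains in a learned representation space: penalizing the distance between their densities versus asking how well the source support covers the target. The sketch below is not the paper's bound or code; it is a minimal illustration of that distinction, assuming a hypothetical RBF-kernel MMD penalty and a nearest-neighbour coverage proxy (the function names, the `radius` parameter, and the toy data are all assumptions made for illustration).

```python
# Illustrative sketch only (not the authors' method): contrast a density-distance
# penalty with a crude support-coverage proxy in a representation space.
import numpy as np

def mmd_rbf(z_src, z_tgt, sigma=1.0):
    """Squared MMD with an RBF kernel: a density-distance penalty."""
    def kernel(a, b):
        d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
        return np.exp(-d2 / (2 * sigma**2))
    return kernel(z_src, z_src).mean() + kernel(z_tgt, z_tgt).mean() - 2 * kernel(z_src, z_tgt).mean()

def support_coverage(z_src, z_tgt, radius=1.0):
    """Fraction of target points within `radius` of some source point:
    a rough proxy for how well the source support covers the target."""
    d2 = np.sum(z_tgt**2, axis=1)[:, None] + np.sum(z_src**2, axis=1)[None, :] - 2 * z_tgt @ z_src.T
    nearest = np.sqrt(np.maximum(d2, 0.0)).min(axis=1)
    return float((nearest <= radius).mean())

rng = np.random.default_rng(0)
z_src = rng.normal(0.0, 1.0, size=(500, 2))   # source representations
z_tgt = rng.normal(0.5, 0.5, size=(500, 2))   # target: shifted, but inside the source support

print("density-distance penalty (MMD):", round(mmd_rbf(z_src, z_tgt), 4))  # strictly positive
print("support coverage proxy:", support_coverage(z_src, z_tgt))           # close to 1.0
```

In this toy setup the target distribution is shifted and narrower than the source, so the density-distance penalty is strictly positive, yet every target point lies well inside the source support and the coverage proxy is close to 1. This is the kind of case in which, as the abstract argues, penalizing density distance can be wasteful.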
Main Authors: | Johansson, Fredrik D.; Sontag, David Alexander |
---|---|
Other Authors: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Format: | Article |
Language: | English |
Published: | International Machine Learning Society, 2021 |
Online Access: | https://hdl.handle.net/1721.1/130356 |
author | Johansson, Fredrik D.; Sontag, David Alexander |
author2 | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
collection | MIT |
description | Learning domain-invariant representations has become a popular approach to unsupervised domain adaptation and is often justified by invoking a particular suite of theoretical results. We argue that there are two significant flaws in such arguments. First, the results in question hold only for a fixed representation and do not account for information lost in non-invertible transformations. Second, domain invariance is often a far too strict requirement and does not always lead to consistent estimation, even under strong and favorable assumptions. In this work, we give generalization bounds for unsupervised domain adaptation that hold for any representation function by acknowledging the cost of non-invertibility. In addition, we show that penalizing distance between densities is often wasteful and propose a bound based on measuring the extent to which the support of the source domain covers the target domain. We perform experiments on well-known benchmarks that illustrate the shortcomings of current standard practice. |
format | Article |
id | mit-1721.1/130356 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2021 |
publisher | International Machine Learning Society |
record_format | dspace |
funding | United States. Office of Naval Research (Award N00014-17-1-2791) |
date_issued | 2019-04 |
date_available | 2021-04-05 |
type | Article (http://purl.org/eprint/type/ConferencePaper) |
issn | 2640-3498 |
citation | Johansson, Fredrik D. et al. "Support and invertibility in domain-invariant representations." Proceedings of Machine Learning Research, 89, 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) 2019, Naha, Okinawa, Japan, April 16-18, 2019, International Machine Learning Society: 527-536. © 2019 The Author(s) |
journal | Proceedings of Machine Learning Research |
publisher_url | http://proceedings.mlr.press/v89/johansson19a.html |
rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. |
mime_type | application/pdf |
title | Support and invertibility in domain-invariant representations |
url | https://hdl.handle.net/1721.1/130356 |