Sparse choice models

Choice models, which capture popular preferences over objects of interest, play a key role in making decisions whose eventual outcome is impacted by human choice behavior. In most scenarios, the choice model, which can effectively be viewed as a distribution over permutations, must be learned from observed data. The observed data, in turn, may frequently be viewed as (partial, noisy) information about marginals of this distribution over permutations. As such, the search for an appropriate choice model boils down to learning a distribution over permutations that is (near-)consistent with observed information about this distribution. In this work, we pursue a non-parametric approach which seeks to learn a choice model (i.e., a distribution over permutations) with the sparsest possible support, consistent with observed data. We assume that the observed data consists of noisy information pertaining to the marginals of the choice model we seek to learn. We establish that any choice model admits a `very' sparse approximation in the sense that there exists a choice model whose support is small relative to the dimension of the observed data and whose marginals approximately agree with the observed marginal information. We further show that under what we dub `signature' conditions, such a sparse approximation can be found in a computationally efficient fashion relative to a brute-force approach. An empirical study using the American Psychological Association election dataset suggests that our approach manages to unearth useful structural properties of the underlying choice model using the sparse approximation found. Our results further suggest that the signature condition is a potential alternative to the recently popularized Restricted Null Space condition for efficient recovery of sparse models.


Bibliographic Details
Main Authors: Farias, Vivek F., Jagabathula, Srikanth, Shah, Devavrat
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers (IEEE) 2014
Online Access: http://hdl.handle.net/1721.1/87680
https://orcid.org/0000-0002-5856-9246
https://orcid.org/0000-0003-0737-3259
_version_ 1826192761884246016
author Farias, Vivek F.
Jagabathula, Srikanth
Shah, Devavrat
author2 Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
author_facet Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Farias, Vivek F.
Jagabathula, Srikanth
Shah, Devavrat
author_sort Farias, Vivek F.
collection MIT
description Choice models, which capture popular preferences over objects of interest, play a key role in making decisions whose eventual outcome is impacted by human choice behavior. In most scenarios, the choice model, which can effectively be viewed as a distribution over permutations, must be learned from observed data. The observed data, in turn, may frequently be viewed as (partial, noisy) information about marginals of this distribution over permutations. As such, the search for an appropriate choice model boils down to learning a distribution over permutations that is (near-)consistent with observed information about this distribution. In this work, we pursue a non-parametric approach which seeks to learn a choice model (i.e., a distribution over permutations) with the sparsest possible support, consistent with observed data. We assume that the observed data consists of noisy information pertaining to the marginals of the choice model we seek to learn. We establish that any choice model admits a `very' sparse approximation in the sense that there exists a choice model whose support is small relative to the dimension of the observed data and whose marginals approximately agree with the observed marginal information. We further show that under what we dub `signature' conditions, such a sparse approximation can be found in a computationally efficient fashion relative to a brute-force approach. An empirical study using the American Psychological Association election dataset suggests that our approach manages to unearth useful structural properties of the underlying choice model using the sparse approximation found. Our results further suggest that the signature condition is a potential alternative to the recently popularized Restricted Null Space condition for efficient recovery of sparse models.
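To make the described setting concrete: a distribution over permutations of n items induces first-order marginals M[i][j] = P(item i is placed at position j), and the goal is to find a sparsely supported distribution whose marginals match the observed M. The sketch below is not the authors' signature-condition algorithm — it is a hypothetical greedy matching-pursuit over a brute-force-enumerated universe of permutations (feasible only for tiny n; all names and the heuristic are ours) that illustrates what sparse recovery from first-order marginals means.

```python
# Toy sketch (NOT the paper's algorithm): recover a sparse distribution over
# permutations of n = 4 items from its first-order marginal matrix.
import itertools

n = 4
perms = list(itertools.permutations(range(n)))  # brute-force universe, 4! = 24

def marginals(dist):
    """dist: {permutation: probability} -> matrix M with M[i][j] = P(item i at position j)."""
    M = [[0.0] * n for _ in range(n)]
    for perm, w in dist.items():
        for pos, item in enumerate(perm):
            M[item][pos] += w
    return M

# Ground truth: a sparse choice model supported on just 2 permutations.
true_dist = {(0, 1, 2, 3): 0.7, (3, 2, 1, 0): 0.3}
M = marginals(true_dist)

# Greedy matching pursuit: repeatedly pick the permutation best aligned with
# the residual marginals, assign it the largest feasible weight, and subtract.
residual = [row[:] for row in M]
recovered = {}
for _ in range(2):
    best = max(perms, key=lambda p: sum(residual[item][pos] for pos, item in enumerate(p)))
    w = min(residual[item][pos] for pos, item in enumerate(best))
    recovered[best] = recovered.get(best, 0.0) + w
    for pos, item in enumerate(best):
        residual[item][pos] -= w

# In this easy instance the two-permutation support is recovered exactly.
print(recovered)
```

Note that the greedy step above enumerates all n! permutations; the point of the paper's signature conditions is precisely that recovery can be done efficiently relative to such brute force.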
first_indexed 2024-09-23T09:28:49Z
format Article
id mit-1721.1/87680
institution Massachusetts Institute of Technology
language en_US
last_indexed 2024-09-23T09:28:49Z
publishDate 2014
publisher Institute of Electrical and Electronics Engineers (IEEE)
record_format dspace
spelling mit-1721.1/876802022-09-30T14:39:34Z Sparse choice models Farias, Vivek F. Jagabathula, Srikanth Shah, Devavrat Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science Sloan School of Management Farias, Vivek F. Shah, Devavrat Choice models, which capture popular preferences over objects of interest, play a key role in making decisions whose eventual outcome is impacted by human choice behavior. In most scenarios, the choice model, which can effectively be viewed as a distribution over permutations, must be learned from observed data. The observed data, in turn, may frequently be viewed as (partial, noisy) information about marginals of this distribution over permutations. As such, the search for an appropriate choice model boils down to learning a distribution over permutations that is (near-)consistent with observed information about this distribution. In this work, we pursue a non-parametric approach which seeks to learn a choice model (i.e., a distribution over permutations) with the sparsest possible support, consistent with observed data. We assume that the observed data consists of noisy information pertaining to the marginals of the choice model we seek to learn. We establish that any choice model admits a `very' sparse approximation in the sense that there exists a choice model whose support is small relative to the dimension of the observed data and whose marginals approximately agree with the observed marginal information. We further show that under what we dub `signature' conditions, such a sparse approximation can be found in a computationally efficient fashion relative to a brute-force approach. An empirical study using the American Psychological Association election dataset suggests that our approach manages to unearth useful structural properties of the underlying choice model using the sparse approximation found.
Our results further suggest that the signature condition is a potential alternative to the recently popularized Restricted Null Space condition for efficient recovery of sparse models. National Science Foundation (U.S.) (NSF CMMI project) 2014-06-06T15:23:53Z 2014-06-06T15:23:53Z 2012-03 Article http://purl.org/eprint/type/ConferencePaper 978-1-4673-3140-1 978-1-4673-3139-5 978-1-4673-3138-8 http://hdl.handle.net/1721.1/87680 Farias, Vivek F., Srikanth Jagabathula, and Devavrat Shah. “Sparse Choice Models.” 2012 46th Annual Conference on Information Sciences and Systems (CISS), March 21-23, 2012, Princeton University, Princeton NJ, USA. p.1-28. https://orcid.org/0000-0002-5856-9246 https://orcid.org/0000-0003-0737-3259 en_US http://dx.doi.org/10.1109/CISS.2012.6310952 Proceedings of the 2012 46th Annual Conference on Information Sciences and Systems (CISS) Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/ application/pdf Institute of Electrical and Electronics Engineers (IEEE) arXiv
spellingShingle Farias, Vivek F.
Jagabathula, Srikanth
Shah, Devavrat
Sparse choice models
title Sparse choice models
title_full Sparse choice models
title_fullStr Sparse choice models
title_full_unstemmed Sparse choice models
title_short Sparse choice models
title_sort sparse choice models
url http://hdl.handle.net/1721.1/87680
https://orcid.org/0000-0002-5856-9246
https://orcid.org/0000-0003-0737-3259
work_keys_str_mv AT fariasvivekf sparsechoicemodels
AT jagabathulasrikanth sparsechoicemodels
AT shahdevavrat sparsechoicemodels