Learning determinantal point processes by corrective negative sampling
Determinantal Point Processes (DPPs) have attracted significant interest from the machine-learning community due to their ability to elegantly and tractably model the delicate balance between quality and diversity of sets. DPPs are commonly learned from data using maximum likelihood estimation (MLE)...
Main Authors: | Mariet, Zelda; Gartrell, Mike; Sra, Suvrit |
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | English |
Published: | MLResearch Press, 2021 |
Online Access: | https://hdl.handle.net/1721.1/130415 |
author | Mariet, Zelda Gartrell, Mike Sra, Suvrit |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
collection | MIT |
description | Determinantal Point Processes (DPPs) have attracted significant interest from the machine-learning community due to their ability to elegantly and tractably model the delicate balance between quality and diversity of sets. DPPs are commonly learned from data using maximum likelihood estimation (MLE). While fitting observed sets well, MLE for DPPs may also assign high likelihoods to unobserved sets that are far from the true generative distribution of the data. To address this issue, which reduces the quality of the learned model, we introduce a novel optimization problem, Contrastive Estimation (CE), which encodes information about “negative” samples into the basic learning model. CE is grounded in the successful use of negative information in machine-vision and language modeling. Depending on the chosen negative distribution (which may be static or evolve during optimization), CE assumes two different forms, which we analyze theoretically and experimentally. We evaluate our new model on real-world datasets; on a challenging dataset, CE learning delivers a considerable improvement in predictive performance over a DPP learned without using contrastive information. |
format | Article |
id | mit-1721.1/130415 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2021 |
publisher | MLResearch Press |
type | Conference Paper (http://purl.org/eprint/type/ConferencePaper) |
citation | Mariet, Zelda et al. "Learning determinantal point processes by corrective negative sampling." 22nd International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 89, MLResearch Press, 2019, 2251-2260. © 2019 The Author(s). |
conference | 22nd International Conference on Artificial Intelligence and Statistics |
journal | Proceedings of Machine Learning Research |
publisher URL | http://proceedings.mlr.press/v89/mariet19b.html |
rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. |
title | Learning determinantal point processes by corrective negative sampling |
url | https://hdl.handle.net/1721.1/130415 |