Output-weighted sampling for multi-armed bandits with extreme payoffs
We present a new type of acquisition function for online decision-making in multi-armed and contextual bandit problems with extreme payoffs. Specifically, we model the payoff function as a Gaussian process and formulate a novel type of upper confidence bound acquisition function that guides exploration towards the bandits that are deemed most relevant according to the variability of the observed rewards.
Main Authors: | Yang, Yibo; Blanchard, Antoine; Sapsis, Themistoklis; Perdikaris, Paris |
Format: | Article |
Language: | English |
Published: | The Royal Society, 2024 |
Subjects: | General Physics and Astronomy; General Engineering; General Mathematics |
Online Access: | https://hdl.handle.net/1721.1/154219 |
_version_ | 1811095353052626944 |
author | Yang, Yibo Blanchard, Antoine Sapsis, Themistoklis Perdikaris, Paris |
author_facet | Yang, Yibo Blanchard, Antoine Sapsis, Themistoklis Perdikaris, Paris |
author_sort | Yang, Yibo |
collection | MIT |
description | We present a new type of acquisition function for online decision-making in multi-armed and contextual bandit problems with extreme payoffs. Specifically, we model the payoff function as a Gaussian process and formulate a novel type of upper confidence bound acquisition function that guides exploration towards the bandits that are deemed most relevant according to the variability of the observed rewards. This is achieved by computing a tractable likelihood ratio that quantifies the importance of the output relative to the inputs and essentially acts as an *attention mechanism* that promotes exploration of extreme rewards. Our formulation is supported by asymptotic zero-regret guarantees, and its performance is demonstrated across several synthetic benchmarks, as well as two realistic examples involving noisy sensor network data. Finally, we provide a JAX library for efficient bandit optimization using Gaussian processes. |
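The abstract describes an upper confidence bound rule whose exploration term is reweighted by a likelihood ratio between the input density and the density of predicted payoffs, so that arms with rare (extreme) predicted rewards attract more attention. A minimal sketch of that idea for discrete arms is below; all names (`output_weighted_ucb`, the kernel-density estimate, the `kappa` and `bandwidth` parameters) are illustrative assumptions, not the paper's actual JAX implementation, and the true acquisition function may differ in form.

```python
# Hedged sketch: output-weighted UCB for discrete bandit arms, given a
# Gaussian-process posterior summarized by per-arm means and std devs.
# This is an illustration of the likelihood-ratio weighting idea only.
import numpy as np

def output_weighted_ucb(mu, sigma, kappa=2.0, bandwidth=0.5):
    """Pick the arm maximizing mu + kappa * w * sigma, where the weight
    w = p_x / p_mu up-weights arms whose predicted payoff is rare under
    the distribution of outputs (the 'attention mechanism' effect)."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    # Kernel-density estimate of the distribution of predicted payoffs.
    diffs = (mu[:, None] - mu[None, :]) / bandwidth
    p_mu = np.exp(-0.5 * diffs**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
    # Uniform input density over the discrete arms.
    p_x = np.full_like(mu, 1.0 / len(mu))
    w = p_x / (p_mu + 1e-12)  # likelihood-ratio ("attention") weights
    return int(np.argmax(mu + kappa * w * sigma))
```

With equal posterior uncertainty, an arm whose predicted payoff sits far in the tail of the output distribution receives a large weight and is explored first, which is the qualitative behavior the abstract attributes to the method.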
first_indexed | 2024-09-23T16:15:24Z |
format | Article |
id | mit-1721.1/154219 |
institution | Massachusetts Institute of Technology |
language | English |
last_indexed | 2024-09-23T16:15:24Z |
publishDate | 2024 |
publisher | The Royal Society |
record_format | dspace |
spelling | mit-1721.1/1542192024-09-16T04:42:09Z Output-weighted sampling for multi-armed bandits with extreme payoffs Yang, Yibo Blanchard, Antoine Sapsis, Themistoklis Perdikaris, Paris General Physics and Astronomy General Engineering General Mathematics We present a new type of acquisition function for online decision-making in multi-armed and contextual bandit problems with extreme payoffs. Specifically, we model the payoff function as a Gaussian process and formulate a novel type of upper confidence bound acquisition function that guides exploration towards the bandits that are deemed most relevant according to the variability of the observed rewards. This is achieved by computing a tractable likelihood ratio that quantifies the importance of the output relative to the inputs and essentially acts as an *attention mechanism* that promotes exploration of extreme rewards. Our formulation is supported by asymptotic zero-regret guarantees, and its performance is demonstrated across several synthetic benchmarks, as well as two realistic examples involving noisy sensor network data. Finally, we provide a JAX library for efficient bandit optimization using Gaussian processes. 2024-04-18T20:50:22Z 2024-04-18T20:50:22Z 2022-04 2024-04-18T20:37:28Z Article http://purl.org/eprint/type/JournalArticle 1364-5021 1471-2946 https://hdl.handle.net/1721.1/154219 Yang Yibo, Blanchard Antoine, Sapsis Themistoklis and Perdikaris Paris 2022. Output-weighted sampling for multi-armed bandits with extreme payoffs. Proc. R. Soc. A. 478: 20210781. en 10.1098/rspa.2021.0781 Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. application/pdf The Royal Society The Royal Society |
spellingShingle | General Physics and Astronomy General Engineering General Mathematics Yang, Yibo Blanchard, Antoine Sapsis, Themistoklis Perdikaris, Paris Output-weighted sampling for multi-armed bandits with extreme payoffs |
title | Output-weighted sampling for multi-armed bandits with extreme payoffs |
title_full | Output-weighted sampling for multi-armed bandits with extreme payoffs |
title_fullStr | Output-weighted sampling for multi-armed bandits with extreme payoffs |
title_full_unstemmed | Output-weighted sampling for multi-armed bandits with extreme payoffs |
title_short | Output-weighted sampling for multi-armed bandits with extreme payoffs |
title_sort | output weighted sampling for multi armed bandits with extreme payoffs |
topic | General Physics and Astronomy General Engineering General Mathematics |
url | https://hdl.handle.net/1721.1/154219 |
work_keys_str_mv | AT yangyibo outputweightedsamplingformultiarmedbanditswithextremepayoffs AT blanchardantoine outputweightedsamplingformultiarmedbanditswithextremepayoffs AT sapsisthemistoklis outputweightedsamplingformultiarmedbanditswithextremepayoffs AT perdikarisparis outputweightedsamplingformultiarmedbanditswithextremepayoffs |