Nonstationary Stochastic Bandits: UCB Policies and Minimax Regret

We study the nonstationary stochastic Multi-Armed Bandit (MAB) problem, in which the reward distributions associated with the arms are time-varying and the total variation in the expected rewards is subject to a variation budget. The regret of a policy is defined as the difference between the expected cumulative reward collected by the policy and that collected by an oracle that selects the arm with the maximum mean reward at each time.
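To make the setting concrete, here is a minimal LaTeX sketch of the two quantities the abstract refers to, using notation standard in the variation-budget bandit literature; the symbols mu_i(t), V_T, T, and pi are conventions of this sketch, not necessarily the article's own:

    % Mean reward of arm i at time t is \mu_i(t); horizon T; variation budget V_T.
    % Budget constraint on the total variation of the mean rewards:
    \sum_{t=1}^{T-1} \max_{i} \left| \mu_i(t+1) - \mu_i(t) \right| \le V_T
    % Dynamic regret of a policy \pi that plays arm \pi_t at time t:
    R_T(\pi) = \sum_{t=1}^{T} \max_{i} \mu_i(t) - \mathbb{E}\!\left[ \sum_{t=1}^{T} \mu_{\pi_t}(t) \right]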

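The "UCB policies" of the title are upper-confidence-bound index rules adapted to drifting rewards. As an illustration only, the Python sketch below implements sliding-window UCB (Garivier and Moulines, 2011), one classical policy of this kind; it is not claimed to be the policy analyzed in this article, and the function sw_ucb, its parameters, and the toy two-arm example are invented for the sketch.

    import math
    import random
    from collections import deque

    def sw_ucb(arms, horizon, window, xi=2.0):
        """Sliding-window UCB: compute UCB indices from only the most recent
        `window` plays, so observations made before a change in the mean
        rewards are eventually forgotten."""
        history = deque()          # (arm index, reward) pairs inside the window
        total_reward = 0.0
        n_arms = len(arms)
        for t in range(1, horizon + 1):
            # Empirical counts and sums restricted to the sliding window.
            counts = [0] * n_arms
            sums = [0.0] * n_arms
            for arm, r in history:
                counts[arm] += 1
                sums[arm] += r
            untried = [i for i in range(n_arms) if counts[i] == 0]
            if untried:
                # Play any arm with no observations in the current window.
                choice = untried[0]
            else:
                w = min(t, window)
                # UCB index: windowed empirical mean plus an exploration bonus.
                choice = max(
                    range(n_arms),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(xi * math.log(w) / counts[i]),
                )
            reward = arms[choice](t)  # arms[i] is a function: time t -> sampled reward
            total_reward += reward
            history.append((choice, reward))
            if len(history) > window:
                history.popleft()
        return total_reward

    # Example: two Bernoulli arms whose means swap halfway through the horizon.
    arms = [
        lambda t: float(random.random() < (0.9 if t <= 500 else 0.1)),
        lambda t: float(random.random() < (0.1 if t <= 500 else 0.9)),
    ]
    print(sw_ucb(arms, horizon=1000, window=200))

The sliding window is one way to realize forgetting; discounting past observations is a common alternative in the same family.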

Bibliographic Details
Main Authors: Lai Wei, Vaibhav Srivastava
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Open Journal of Control Systems
Online Access: https://ieeexplore.ieee.org/document/10460198/