Maximum Entropy Exploration in Contextual Bandits with Neural Networks and Energy Based Models
Main Authors: | Adam Elwood, Marco Leonardi, Ashraf Mohamed, Alessandro Rozza |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-01-01 |
Series: | Entropy |
Subjects: | machine learning; multi-armed bandit; Thompson Sampling; energy based models |
Online Access: | https://www.mdpi.com/1099-4300/25/2/188 |
author | Adam Elwood; Marco Leonardi; Ashraf Mohamed; Alessandro Rozza
collection | DOAJ |
description | Contextual bandits can solve a huge range of real-world problems. However, current popular algorithms to solve them either rely on linear models or unreliable uncertainty estimation in non-linear models, which are required to deal with the exploration–exploitation trade-off. Inspired by theories of human cognition, we introduce novel techniques that use maximum entropy exploration, relying on neural networks to find optimal policies in settings with both continuous and discrete action spaces. We present two classes of models, one with neural networks as reward estimators, and the other with energy based models, which model the probability of obtaining an optimal reward given an action. We evaluate the performance of these models in static and dynamic contextual bandit simulation environments. We show that both techniques outperform standard baseline algorithms, such as NN HMC, NN Discrete, Upper Confidence Bound, and Thompson Sampling, where energy based models have the best overall performance. This provides practitioners with new techniques that perform well in static and dynamic settings, and are particularly well suited to non-linear scenarios with continuous action spaces. |
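The description above outlines the two approaches only at a high level. As a minimal sketch of the first class (maximum entropy exploration with a neural reward estimator), a Boltzmann policy can be used: the softmax distribution pi(a|x) proportional to exp(r_hat(x, a) / tau) is the unique maximiser of expected estimated reward plus tau times the policy entropy. Everything below is an illustrative assumption, not the authors' implementation: the network architecture, the context and action dimensions, the temperature `tau`, and the stand-in environment are all hypothetical.

```python
# A minimal sketch (not the paper's exact implementation) of maximum entropy
# exploration in a discrete-action contextual bandit, using a neural network
# as the reward estimator. Dimensions, architecture, and `tau` are assumptions.
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Estimates one expected reward per action, given a context vector."""
    def __init__(self, context_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),  # one reward estimate per action
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        return self.net(context)

def select_action(model: RewardNet, context: torch.Tensor, tau: float = 0.1) -> int:
    # Boltzmann (maximum entropy) policy: pi(a|x) ∝ exp(r_hat(x, a) / tau).
    # This distribution maximises E_pi[r_hat] + tau * H(pi); tau -> 0 recovers
    # greedy exploitation, larger tau explores more.
    with torch.no_grad():
        probs = torch.softmax(model(context) / tau, dim=-1)
    return int(torch.multinomial(probs, num_samples=1).item())

# Online loop with a hypothetical stand-in environment: observe a context,
# sample an action from the max-entropy policy, then regress on the reward.
model = RewardNet(context_dim=8, n_actions=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    x = torch.randn(8)            # stand-in for the environment's context
    a = select_action(model, x)
    r = torch.randn(())           # stand-in for the observed reward
    loss = (model(x)[a] - r) ** 2  # fit the estimator to observed rewards
    opt.zero_grad(); loss.backward(); opt.step()
```

For the energy based variant, the abstract suggests the analogous idea: an energy E(x, a) defines pi(a|x) proportional to exp(-E(x, a)), which can be sampled directly over a discrete action set or, for continuous actions, with a sampler such as Hamiltonian Monte Carlo (plausibly the "HMC" in the abstract's model names).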
format | Article |
id | doaj.art-9894cdbeeb1d45ed9d7ed600fdc99920 |
institution | Directory Open Access Journal |
issn | 1099-4300 |
language | English |
publishDate | 2023-01-01 |
publisher | MDPI AG |
series | Entropy |
spelling | Entropy, vol. 25, no. 2, art. 188 (2023-01-01), MDPI AG, ISSN 1099-4300. DOI: 10.3390/e25020188. Authors: Adam Elwood, Marco Leonardi, Ashraf Mohamed, Alessandro Rozza (all: lastminute.com Group, Vicolo de Calvi, 2, 6830 Chiasso, Switzerland).
title | Maximum Entropy Exploration in Contextual Bandits with Neural Networks and Energy Based Models |
topic | machine learning; multi-armed bandit; Thompson Sampling; energy based models
url | https://www.mdpi.com/1099-4300/25/2/188 |