Multi-Armed Bandit Regularized Expected Improvement for Efficient Global Optimization of Expensive Computer Experiments With Low Noise
Computer experiments are widely used to mimic expensive physical processes as black-box functions. A typical challenge of expensive computer experiments is to find the set of inputs that produce the desired response. This study proposes a multi-armed bandit regularized expected improvement (BREI) me...
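The abstract is truncated, but the method builds on the standard expected improvement (EI) acquisition function used in efficient global optimization; the paper's contribution (the bandit-based regularization) is not reproduced here. As context, a minimal sketch of plain EI for minimization, assuming a Gaussian-process surrogate that supplies a posterior mean and standard deviation at each candidate point:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard EI acquisition for minimization (not the paper's BREI variant).

    mu, sigma: surrogate posterior mean and std at candidate points.
    f_best: best (lowest) objective value observed so far.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    improvement = f_best - mu
    with np.errstate(divide="ignore", invalid="ignore"):
        z = improvement / sigma
        ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
    # Where the surrogate is certain (sigma == 0), EI reduces to max(improvement, 0).
    return np.where(sigma > 0, ei, np.maximum(improvement, 0.0))

# A candidate whose predicted mean is below the incumbent scores higher
# than one whose mean is above it, at equal uncertainty.
print(expected_improvement([0.5, 1.2], [0.3, 0.3], f_best=1.0))
```

The next evaluation point is chosen by maximizing this acquisition over the input space; BREI, per the title, regularizes this criterion via a multi-armed bandit strategy to cope with low observation noise.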
| Main Authors: | Rajitha Meka, Adel Alaeddini, Chinonso Ovuegbe, Pranav A. Bhounsule, Peyman Najafirad, Kai Yang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2021-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9477602/ |
Similar Items

- Signal detection models as contextual bandits
  by: Thomas N. Sherratt, et al.
  Published: (2023-06-01)
- Non Stationary Multi-Armed Bandit: Empirical Evaluation of a New Concept Drift-Aware Algorithm
  by: Emanuele Cavenaghi, et al.
  Published: (2021-03-01)
- Multi-armed linear bandits with latent biases
  by: Kang, Qiyu, et al.
  Published: (2024)
- Maximum Entropy Exploration in Contextual Bandits with Neural Networks and Energy Based Models
  by: Adam Elwood, et al.
  Published: (2023-01-01)
- Multi-arm bandit-led clustering in federated learning
  by: Zhao, Joe Chen Xuan
  Published: (2024)