Optimizing Bayesian optimization

We are concerned primarily with improving the practical applicability of Bayesian optimization. We make contributions in three key areas.

We develop an intuitive online stopping criterion, allowing only as many steps to be taken as are necessary to achieve the desired target. By combining this with intelligent online switching between acquisition functions and pure local optimization, we are also able to substantially improve convergence to the local minimum associated with our final solution.

In cases where a continuum of reduced-cost, but also reduced-accuracy, evaluations is available, we develop a Bayesian optimization acquisition function to select both the location and cost of each evaluation. We achieve this with lower overheads than previous methods, translating to a real increase in performance. Part of this improvement is achieved by way of a new, more efficient method for generating support points to sample the minimum of a Gaussian process. Further, in the case that the reduced-cost estimates are unbiased, we show that a practical solution cannot exist in most cases without also taking into consideration both computational overheads and a restriction on available resources. Given this knowledge, we then develop a method which provides a viable solution in this setting.

Finally, we outline practical implementation details for Bayesian optimization which allow substantial reductions in overhead costs without changing the theoretical properties of optimization. This is primarily achieved by use of adaptive quadrature to marginalize Gaussian process hyperparameters in place of the more common slice sampling approach.
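As a concrete illustration of the first contribution, the following is a minimal sketch of a Bayesian optimization loop that stops early once further global steps look unprofitable and then switches to pure local optimization. The thesis's actual stopping criterion and switching rule are not reproduced here; the expected-improvement threshold `ei_tol`, the random candidate pool, and the L-BFGS-B polish are illustrative stand-ins.

```python
# Minimal sketch only: a generic Bayesian optimization loop with an online
# stopping rule, then a switch to pure local optimization. The thesis's
# actual criterion is not reproduced here; the expected-improvement
# threshold `ei_tol` and the random candidate pool are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best):
    # EI for minimization; guard against zero predictive variance.
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_with_stopping(f, bounds, n_init=5, n_max=50, ei_tol=1e-4, seed=0):
    lo, hi = bounds
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([f(x[0]) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_max):
        gp.fit(X, y)
        cand = rng.uniform(lo, hi, size=(512, 1))   # candidate pool
        mu, sd = gp.predict(cand, return_std=True)
        ei = expected_improvement(mu, sd, y.min())
        if ei.max() < ei_tol:                       # online stopping criterion
            break                                   # no further global steps
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    # Switch to a cheap local polish around the incumbent, mirroring the
    # idea of finishing with pure local optimization.
    res = minimize(lambda z: float(f(z[0])), x0=X[np.argmin(y)],
                   bounds=[(lo, hi)])
    return res.x[0], res.fun

x_best, f_best = bo_with_stopping(lambda x: np.sin(3 * x) + 0.1 * x**2,
                                  (-3.0, 3.0))
```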

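For the second contribution, the sketch below shows one simple way to pick both the location and the cost of an evaluation: score candidates by expected improvement per unit cost. This heuristic, the `posterior(x, c)` interface, and the `cost_grid` are assumptions for illustration, not the lower-overhead acquisition function developed in the thesis.

```python
# Illustrative sketch only: choosing both where and at what cost to evaluate
# by maximizing expected improvement per unit cost. The EI-per-cost rule and
# the toy posterior below are stand-ins, not the thesis's acquisition.
import numpy as np
from scipy.stats import norm

def ei_per_cost(mu, sigma, best, cost):
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return ei / cost

def select_location_and_cost(posterior, best, x_grid, cost_grid):
    """posterior(x, c) -> (mu, sigma); cheaper evaluations are noisier."""
    best_score, best_pair = -np.inf, None
    for c in cost_grid:
        mu, sigma = posterior(x_grid, c)
        scores = ei_per_cost(mu, sigma, best, c)
        i = int(np.argmax(scores))
        if scores[i] > best_score:
            best_score, best_pair = scores[i], (x_grid[i], c)
    return best_pair

# Toy posterior: predictive noise grows as the evaluation cost shrinks.
x_grid = np.linspace(-3.0, 3.0, 201)
posterior = lambda x, c: (np.sin(3 * x), 0.3 + 0.5 / np.sqrt(c))
print(select_location_and_cost(posterior, best=-0.9, x_grid=x_grid,
                               cost_grid=[0.1, 1.0, 10.0]))
```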
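For the final contribution, the sketch below marginalizes a Gaussian process's log-lengthscale with a fixed Gauss-Hermite quadrature rule rather than slice sampling. The thesis uses adaptive quadrature; the one-dimensional Gaussian prior over the log-lengthscale, the unit-variance RBF kernel, and the fixed nine-node rule here are simplifying assumptions.

```python
# Illustrative sketch only: marginalizing a GP's log-lengthscale with fixed
# Gauss-Hermite quadrature instead of slice sampling. The thesis uses
# adaptive quadrature; the N(prior_mean, prior_std^2) prior and unit-variance
# RBF kernel are simplifying assumptions.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rbf(X1, X2, ls):
    d = X1[:, None, :] - X2[None, :, :]
    return np.exp(-0.5 * np.sum(d**2, axis=-1) / ls**2)

def gp_predict(X, y, Xs, ls, noise=1e-4):
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    L = cho_factor(K)
    ks = rbf(X, Xs, ls)
    mu = ks.T @ cho_solve(L, y)
    var = 1.0 - np.sum(ks * cho_solve(L, ks), axis=0)
    # Gaussian log marginal likelihood of the data under this lengthscale.
    logdet = 2.0 * np.sum(np.log(np.diag(L[0])))
    loglik = -0.5 * (y @ cho_solve(L, y) + logdet + len(y) * np.log(2 * np.pi))
    return mu, np.maximum(var, 1e-12), loglik

def marginal_predict(X, y, Xs, prior_mean=0.0, prior_std=1.0, n_nodes=9):
    t, w = np.polynomial.hermite.hermgauss(n_nodes)     # Gauss-Hermite rule
    thetas = prior_mean + np.sqrt(2.0) * prior_std * t  # log-lengthscale nodes
    mus, vars_, logliks = zip(*(gp_predict(X, y, Xs, np.exp(th))
                                for th in thetas))
    # Reweight the prior quadrature weights by each node's data likelihood,
    # approximating the posterior over the hyperparameter.
    lw = np.log(w) + np.array(logliks)
    pw = np.exp(lw - lw.max()); pw /= pw.sum()
    mu = sum(p * m for p, m in zip(pw, mus))
    second = sum(p * (v + m**2) for p, m, v in zip(pw, mus, vars_))
    return mu, second - mu**2                           # mixture mean/variance

X = np.array([[0.0], [1.0], [2.0]]); y = np.array([0.0, 0.8, 0.1])
mu, var = marginal_predict(X, y, np.array([[0.5], [1.5]]))
```

The appeal of quadrature here is that the predictive distribution is a deterministic mixture over a small, fixed set of hyperparameter nodes, so each prediction needs only a handful of GP solves rather than one per MCMC sample.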

Bibliographic Details
Main Author: McLeod, M
Other Authors: Roberts, S; Osborne, M
Format: Thesis
Language: English
Published: 2018
Institution: University of Oxford
Subjects: Machine learning