Global Continuous Optimization with Error Bound and Fast Convergence

This paper considers global optimization with a black-box unknown objective function that can be non-convex and non-differentiable. Such a difficult optimization problem arises in many real-world applications, such as parameter tuning in machine learning, engineering design problems, and planning with a complex physics simulator. This paper proposes a new global optimization algorithm, called Locally Oriented Global Optimization (LOGO), that aims for both fast convergence in practice and a finite-time error bound in theory. The advantage and usage of the new algorithm are illustrated via theoretical analysis and an experiment conducted with 11 benchmark test functions. Further, we modify the LOGO algorithm to specifically solve a planning problem via policy search with continuous state/action space and a long time horizon while maintaining its finite-time error bound. We apply the proposed planning method to accident management of a nuclear power plant. The result of the application study demonstrates the practical utility of our method.
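
The abstract describes maximizing a black-box objective that may be non-convex and non-differentiable. As a rough, self-contained illustration of the space-partitioning family of global optimizers commonly used for this problem class, the Python sketch below repeatedly trisects the most promising cell at each depth of a hierarchical partition and evaluates cell centers. It is not the paper's LOGO algorithm, whose details are not reproduced in this record; the function and parameter names (partition_optimize, budget, etc.) are hypothetical.

```python
# Illustrative sketch only: a generic hierarchical-partitioning global optimizer.
# This is NOT the LOGO algorithm from the paper; it only shows the general idea
# of optimizing a black-box function by refining a partition of the search box.
import math

def partition_optimize(f, lower, upper, budget=200):
    """Maximize a black-box function f over the box [lower, upper]
    by repeatedly trisecting the best cell found at each depth."""
    d = len(lower)
    # Each cell: (depth, lower corner, upper corner, f evaluated at the center).
    center = [(l + u) / 2.0 for l, u in zip(lower, upper)]
    cells = [(0, list(lower), list(upper), f(center))]
    evals = 1
    best_x, best_val = center, cells[0][3]

    while evals < budget:
        cells.sort(key=lambda c: c[0])
        max_depth = cells[-1][0]
        threshold = -math.inf
        for depth in range(max_depth + 1):
            layer = [c for c in cells if c[0] == depth]
            if not layer:
                continue
            # Expand the best cell at this depth only if it beats
            # the best value already expanded at shallower depths.
            cell = max(layer, key=lambda c: c[3])
            if cell[3] <= threshold:
                continue
            threshold = cell[3]
            cells.remove(cell)
            _, lo, hi, _ = cell
            # Trisect along the longest side and evaluate each child's center.
            axis = max(range(d), key=lambda i: hi[i] - lo[i])
            third = (hi[axis] - lo[axis]) / 3.0
            for k in range(3):
                clo, chi = list(lo), list(hi)
                clo[axis] = lo[axis] + k * third
                chi[axis] = lo[axis] + (k + 1) * third
                cx = [(a + b) / 2.0 for a, b in zip(clo, chi)]
                val = f(cx)
                evals += 1
                cells.append((depth + 1, clo, chi, val))
                if val > best_val:
                    best_x, best_val = cx, val
                if evals >= budget:
                    break
            if evals >= budget:
                break
    return best_x, best_val

# Example: maximize a non-convex test function (negative Rastrigin) on [-5, 5]^2.
if __name__ == "__main__":
    f = lambda x: -sum(xi**2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)
    x_best, v_best = partition_optimize(f, [-5.0, -5.0], [5.0, 5.0], budget=300)
    print(x_best, v_best)
```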


Bibliographic Details
Main Authors: Maruyama, Yu, Zheng, Xiaoyu, Kawaguchi, Kenji
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: en_US
Published: Association for the Advancement of Artificial Intelligence, 2017
Online Access: http://hdl.handle.net/1721.1/107756
ORCID: https://orcid.org/0000-0003-1839-7504
Citation: Kawaguchi, Kenji, Yu Maruyama and Xiaoyu Zheng. "Global Continuous Optimization with Error Bound and Fast Convergence." Journal of Artificial Intelligence Research 56 (2016): 153-195.
Journal: Journal of Artificial Intelligence Research
DOI: http://dx.doi.org/10.1613/jair.4742
ISSN: 1943-5037, 1076-9757
Date Issued: 2016-06
Article Type: http://purl.org/eprint/type/JournalArticle
File Format: application/pdf
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.