Rule-based reinforcement learning methodology to inform evolutionary algorithms for constrained optimization of engineering applications


Bibliographic Details
Main Authors: Radaideh, Majdi I, Shirvan, Koroush
Other Authors: Massachusetts Institute of Technology. Department of Nuclear Science and Engineering
Format: Article
Language: English
Published: Elsevier BV, 2021
Online Access: https://hdl.handle.net/1721.1/133486
Description: © 2021. For practical engineering optimization problems, the design space is typically narrow given all the real-world constraints. Reinforcement Learning (RL) has commonly been guided by stochastic algorithms to tune hyperparameters and leverage exploration. Conversely, in this work we propose a rule-based RL methodology to guide evolutionary algorithms (EAs) in constrained optimization. First, RL proximal policy optimization agents are trained to master matching some of the problem rules/constraints; then RL is used to inject experiences that guide various evolutionary/stochastic algorithms such as genetic algorithms, simulated annealing, particle swarm optimization, differential evolution, and natural evolution strategies. Accordingly, we develop RL-guided EAs, which are benchmarked against their standalone counterparts. In continuous optimization, RL-guided EAs demonstrate significant improvement over standalone EAs on two engineering benchmarks. The main problem analyzed is nuclear fuel assembly combinatorial optimization, with high-dimensional and computationally expensive physics. The results demonstrate the ability of RL to efficiently learn the rules that nuclear fuel engineers follow to realize candidate solutions. Without these rules, the design space is too large for RL/EA to find many candidates. When the rule-based RL methodology is imposed, RL-guided EAs outperform standalone algorithms by a wide margin, with more than a 10-fold improvement in exploration capability and computational efficiency. These insights imply that, when facing a constrained problem with numerous local optima, RL can be useful for focusing the search on areas where expert knowledge has demonstrated merit, while evolutionary/stochastic algorithms use their exploratory features to increase the number of feasible solutions.
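The description above centers on injecting rule-aware experience into an EA's search so that candidates respect engineering constraints from the start. As a minimal toy illustration only (not the authors' implementation, which trains PPO agents against nuclear fuel assembly physics), the sketch below contrasts purely random candidate generation with a rule-respecting constructive sampler on an invented adjacency constraint; the constraint, function names, and problem sizes are all assumptions for demonstration.

```python
import random

N_POSITIONS = 10   # toy "assembly" length (invented for illustration)
TYPES = [0, 1, 2]  # toy component types (invented for illustration)

def is_feasible(candidate):
    # Toy stand-in rule: no two adjacent positions may hold the same type.
    return all(a != b for a, b in zip(candidate, candidate[1:]))

def random_candidate(rng):
    # Unguided sampling: ignores the rule entirely.
    return [rng.choice(TYPES) for _ in range(N_POSITIONS)]

def rule_guided_candidate(rng):
    # Stand-in for a trained rule-aware policy: each position is sampled
    # only from types the adjacency rule allows given the previous choice.
    cand = [rng.choice(TYPES)]
    for _ in range(N_POSITIONS - 1):
        cand.append(rng.choice([t for t in TYPES if t != cand[-1]]))
    return cand

def feasible_fraction(sampler, n, rng):
    # Fraction of n sampled candidates that satisfy the rule.
    return sum(is_feasible(sampler(rng)) for _ in range(n)) / n

if __name__ == "__main__":
    print(feasible_fraction(random_candidate, 1000, random.Random(0)))
    print(feasible_fraction(rule_guided_candidate, 1000, random.Random(0)))  # 1.0 by construction
```

In this toy setting, unguided sampling rarely satisfies all adjacency constraints, while the rule-respecting sampler is feasible by construction; such candidates could, for example, seed an EA's initial population so the evolutionary search spends its budget inside the feasible region.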
Journal: Knowledge-Based Systems
DOI: 10.1016/J.KNOSYS.2021.106836
License: Creative Commons Attribution-NonCommercial-NoDerivs, http://creativecommons.org/licenses/by-nc-nd/4.0/
Date deposited: 2021-10-27