Predicting optimal value functions by interpolating reward functions in scalarized multi-objective reinforcement learning

© 2020 IEEE. A common approach for defining a reward function for multi-objective reinforcement learning (MORL) problems is the weighted sum of the multiple objectives. The weights are then treated as design parameters dependent on the expertise (and preference) of the person performing the learning, with the typical result that a new solution is required for any change in these settings. This paper investigates the relationship between the reward function and the optimal value function for MORL; specifically addressing the question of how to approximate the optimal value function well beyond the set of weights for which the optimization problem was actually solved, thereby avoiding the need to recompute for any particular choice. We prove that the value function transforms smoothly given a transformation of weights of the reward function (and thus a smooth interpolation in the policy space). A Gaussian process is used to obtain a smooth interpolation over the reward function weights of the optimal value function for three well-known examples: Gridworld, Objectworld and Pendulum. The results show that the interpolation can provide robust values for sample states and actions in both discrete and continuous domain problems. Significant advantages arise from utilizing this interpolation technique in the domain of autonomous vehicles: easy, instant adaptation of user preferences while driving and true randomization of obstacle vehicle behavior preferences during training.
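
As a concrete illustration of the approach described in the abstract, the following is a minimal, self-contained sketch (not the authors' code) of interpolating the optimal value function over reward-function weights with a Gaussian process. The toy two-objective chain MDP and the use of scikit-learn's GaussianProcessRegressor with an RBF kernel are assumptions made for illustration; they are not the paper's benchmarks (Gridworld, Objectworld, Pendulum) or its implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    N_STATES, GAMMA = 5, 0.9
    # Two reward components per state: objective A favors the left end of the
    # chain, objective B favors the right end (illustrative toy problem).
    R = np.stack([np.linspace(1.0, 0.0, N_STATES),   # r_A(s)
                  np.linspace(0.0, 1.0, N_STATES)])  # r_B(s)

    def optimal_values(w):
        """Value iteration on the scalarized reward r_w = w*r_A + (1-w)*r_B.
        Actions: step left or right along the chain (clipped at the ends)."""
        r = w * R[0] + (1.0 - w) * R[1]
        v = np.zeros(N_STATES)
        left = np.maximum(np.arange(N_STATES) - 1, 0)
        right = np.minimum(np.arange(N_STATES) + 1, N_STATES - 1)
        for _ in range(500):
            v = r + GAMMA * np.maximum(v[left], v[right])
        return v

    # Solve the scalarized problem only at a handful of weights...
    train_w = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    train_V = np.stack([optimal_values(w) for w in train_w])  # (n_weights, n_states)

    # ...and fit a GP over the weight so V* can be predicted at unseen weights.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    gp.fit(train_w.reshape(-1, 1), train_V)

    w_new = 0.6  # a weight for which the MDP was never solved
    V_pred = gp.predict(np.array([[w_new]]))[0]
    V_true = optimal_values(w_new)
    print("max abs error:", np.max(np.abs(V_pred - V_true)))

The final line simply compares the interpolated values against those obtained by actually re-solving the MDP at the new weight, which is the kind of comparison the paper carries out on its benchmark problems.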

Bibliographic Details
Main Authors: Kusari, Arpan, How, Jonathan P.
Format: Article
Language: English
Published: IEEE 2021
Online Access: https://hdl.handle.net/1721.1/136715
DOI: 10.1109/icra40945.2020.9197456
Citation: Kusari, Arpan and How, Jonathan P. 2020. "Predicting optimal value functions by interpolating reward functions in scalarized multi-objective reinforcement learning." Proceedings - IEEE International Conference on Robotics and Automation.
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)