Linear convergence of a policy gradient method for some finite horizon continuous time control problems
Main Authors:
Format: Journal article
Language: English
Published: Society for Industrial and Applied Mathematics, 2023
Summary: Despite its popularity in the reinforcement learning community, a provably convergent policy gradient method for continuous space-time control problems with nonlinear state dynamics has been elusive. This paper proposes proximal gradient algorithms for feedback controls of finite-time horizon stochastic control problems. The state dynamics are nonlinear diffusions with control-affine drift, and the cost functions are nonconvex in the state and nonsmooth in the control. The system noise can degenerate, which allows for deterministic control problems as special cases. We prove under suitable conditions that the algorithm converges linearly to a stationary point of the control problem, and is stable with respect to policy updates by approximate gradient steps. The convergence result justifies the recent reinforcement learning heuristics that adding entropy regularization or a fictitious discount factor to the optimization objective accelerates the convergence of policy gradient methods. The proof exploits careful regularity estimates of backward stochastic differential equations.
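To make the problem class described in the summary concrete, the following is a minimal LaTeX sketch of a finite-horizon control problem with control-affine drift, a nonsmooth control cost, and a proximal gradient update of the feedback control. The symbols $b$, $B$, $\sigma$, $f$, $g$, $h$, $\tau$ and $\mathcal{H}$ are illustrative assumptions, not the paper's exact notation or formulation.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Illustrative state dynamics (assumed notation): nonlinear diffusion with
% control-affine drift; the noise coefficient $\sigma$ may be degenerate,
% which covers deterministic problems as a special case.
\[
  \mathrm{d}X_t = \bigl( b(t, X_t) + B(t, X_t)\,\alpha(t, X_t) \bigr)\,\mathrm{d}t
                  + \sigma(t, X_t)\,\mathrm{d}W_t, \qquad X_0 = x_0 .
\]

% Finite-horizon cost, nonconvex in the state (through $f$ and $g$) and
% nonsmooth in the control (through $h$).
\[
  J(\alpha) = \mathbb{E}\!\left[ \int_0^T \bigl( f(t, X_t) + h\bigl(\alpha(t, X_t)\bigr) \bigr)\,\mathrm{d}t
              + g(X_T) \right].
\]

% One proximal gradient step on the feedback control with stepsize $\tau$,
% where $\nabla_{a} \mathcal{H}^{(k)}$ denotes the control-gradient of the
% Hamiltonian evaluated along the adjoint (backward SDE) processes associated
% with the current control $\alpha^{(k)}$.
\[
  \alpha^{(k+1)} = \operatorname{prox}_{\tau h}\bigl( \alpha^{(k)} - \tau\, \nabla_{a} \mathcal{H}^{(k)} \bigr).
\]

\end{document}
```

In this reading, the nonsmooth part $h$ of the running cost is handled by the proximal map while the smooth part enters through the gradient step, which is how a proximal gradient scheme typically accommodates costs that are nonsmooth in the control.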