Volume-weighted Bellman error method for adaptive meshing in approximate dynamic programming
Optimal control and reinforcement learning problems have an associated “value function” which must be suitably approximated. Value function approximation problems usually have different precision requirements in different regions of the state space. A uniform gridding wastes resources in regions in which the...
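The abstract is truncated here, so the following is only a minimal illustrative sketch, not the authors' algorithm: it assumes a toy 1-D discounted problem and refines the grid cells whose one-step Bellman residual, weighted by cell width (the "volume" in one dimension), is largest. All dynamics, costs, constants, and helper names (`f`, `g`, `GAMMA`, `refine`) are assumptions introduced for illustration.

```python
import numpy as np

GAMMA = 0.95                              # assumed discount factor
ACTIONS = np.linspace(-1.0, 1.0, 11)      # assumed discretized action set

def f(x, u):
    # toy linear dynamics, clipped to the state domain [0, 1]
    return np.clip(x + 0.1 * u, 0.0, 1.0)

def g(x, u):
    # toy quadratic stage cost
    return x**2 + 0.1 * u**2

def bellman_residual(x, grid, v):
    # one-step Bellman error at state x under the current interpolated value function
    q = [g(x, u) + GAMMA * np.interp(f(x, u), grid, v) for u in ACTIONS]
    return abs(np.interp(x, grid, v) - min(q))

def refine(grid, v, n_new=5):
    # volume-weighted criterion: score each cell by its width times the
    # Bellman residual at its midpoint, then split the worst-scoring cells
    mids = 0.5 * (grid[:-1] + grid[1:])
    widths = np.diff(grid)
    scores = widths * np.array([bellman_residual(m, grid, v) for m in mids])
    worst = np.argsort(scores)[-n_new:]
    new_grid = np.sort(np.concatenate([grid, mids[worst]]))
    return new_grid, np.interp(new_grid, grid, v)

# start from a coarse uniform grid, then alternate value iteration and refinement
grid = np.linspace(0.0, 1.0, 6)
v = np.zeros_like(grid)
for _ in range(10):
    for _ in range(50):  # approximate value iteration sweeps on the current mesh
        v = np.array([min(g(x, u) + GAMMA * np.interp(f(x, u), grid, v)
                          for u in ACTIONS) for x in grid])
    grid, v = refine(grid, v)

print(f"final grid size: {grid.size}")
```

Under these assumptions, refinement concentrates grid points where the interpolated value function still violates the Bellman equation most, instead of spending them uniformly across the state space.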
Main Authors: | Leopoldo Armesto, Antonio Sala |
---|---|
Format: | Article |
Language: | Spanish |
Published: | Universitat Politecnica de Valencia, 2021-12-01 |
Series: | Revista Iberoamericana de Automática e Informática Industrial RIAI |
Subjects: | |
Online Access: | https://polipapers.upv.es/index.php/RIAI/article/view/15698 |
Similar Items

- Approximate Dynamic Programming Methodology for Data-based Optimal Controllers
  by: Henry Díaz, et al.
  Published: (2019-06-01)
- APPROXIMATE BOUNDARY CONTROLLABILITY FOR THE SEMILINEAR HEAT EQUATION
  by: Víctor Rafael Cabanillas Zannini
  Published: (2014-09-01)
- Los mapas conceptuales en la enseñanza. Viejas técnicas con recursos nuevos.
  by: Mateo G. Lezcano Brito, et al.
  Published: (2013-04-01)
- Albert Einstein e o falseacionismo de Karl Popper
  by: Douglas Antonio Bassani, et al.
  Published: (2019-11-01)
- Entropía aproximada del efecto placebo en ensayos clínicos con antidepresivos de nueva generación
  by: María Eloisa Cuestas, et al.
  Published: (2010-12-01)