6.231 Dynamic Programming and Stochastic Control, Fall 2011
The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite stat...
| Main Author: | Bertsekas, Dimitri |
| --- | --- |
| Other Authors: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
| Format: | Learning Object |
| Language: | en-US |
| Published: | 2011 |
| Subjects: | |
| Online Access: | http://hdl.handle.net/1721.1/101677 |
Similar Items
- 6.231 Dynamic Programming and Stochastic Control, Fall 2008
  by: Bertsekas, Dimitri
  Published: (2008)
- 6.231 Dynamic Programming and Stochastic Control, Fall 2002
  by: Bertsekas, Dimitri P.
  Published: (2002)
- Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems
  by: Chen, Tengpeng, et al.
  Published: (2018)
- On the Convergence of Stochastic Iterative Dynamic Programming Algorithms
  by: Jaakkola, Tommi, et al.
  Published: (2004)
- Fully dynamic (2 + epsilon) approximate all-pairs shortest paths with fast query and close to linear update time
  by: Bernstein, Aaron
  Published: (2010)