Markov decision processes with unknown state feature values for safe exploration using Gaussian processes
When exploring an unknown environment, a mobile robot must decide where to observe next. It must do this whilst minimising the risk of failure, exploring only areas that it expects to be safe. In this context, safety refers to the robot remaining in regions where critical environment features (e…
Main Authors: Budd, M; Lacerda, B; Duckworth, P; West, A; Lennox, B; Hawes, N
Material Type: Conference item
Language: English
Published: Institute of Electrical and Electronics Engineers, 2021
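The safety notion in the abstract, only visiting states whose critical environment feature values are predicted to lie within safe bounds, can be sketched with plain Gaussian-process regression. This is a minimal illustration under assumed choices (squared-exponential kernel, noise level, and confidence parameter `beta`), not the paper's actual implementation:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel between two 1-D location arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-3, lengthscale=1.0):
    # Standard GP regression posterior mean and std at query locations.
    K = rbf(x_train, x_train, lengthscale) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_query, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.ones(len(x_query)) - np.sum(v * v, axis=0)  # prior variance 1
    return mu, np.sqrt(np.maximum(var, 0.0))

def safe_states(mu, sigma, low, high, beta=2.0):
    # Pessimistic check: a state counts as safe only if the whole
    # beta-confidence interval of the feature lies within [low, high].
    return (mu - beta * sigma >= low) & (mu + beta * sigma <= high)
```

With feature observations near a candidate location the posterior is confident and the state can be declared safe; far from all observations the posterior variance reverts to the prior and the pessimistic check rejects the state, which is the behaviour that keeps exploration inside regions expected to be safe.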
Similar Items

- Planning under uncertainty for safe robot exploration using Gaussian process prediction
  Author: Stephens, A, et al.
  Published: 2024
- On solving a Stochastic Shortest-Path Markov Decision Process as probabilistic inference
  Author: Baioumy, M, et al.
  Published: 2022
- Bayesian reinforcement learning for single-episode missions in partially unknown environments
  Author: Budd, M, et al.
  Published: 2022
- Time-bounded mission planning in time-varying domains with semi-MDPs and Gaussian processes
  Author: Duckworth, P, et al.
  Published: 2021
- Minimax regret optimisation for robust planning in uncertain Markov decision processes
  Author: Rigter, M, et al.
  Published: 2021