Summary: | Decision-making problems in robotics necessitate reasoning in the presence of uncertainty, and different problems often call for distinct solutions. For instance, state estimation is commonly achieved with variants of the Bayes Filter, such as Kalman Filters and Particle Filters, while planning under uncertainty is modeled using Markov Decision Processes. However, once domain knowledge is stripped away, most decision-making tasks essentially perform some form of probabilistic inference: given some random variables, some evidence, and probabilistic models, the goal is to find likely values for other random variables. In this thesis, we propose the use of probabilistic graphical models (PGMs) to model decision-making problems in robotics, and we leverage existing approaches from probabilistic inference to solve them. This provides a single coherent framework for reasoning about decision-making. A major contribution of this thesis is assessing the utility of modeling decision-making problems as probabilistic inference: even if it is possible to model a problem as probabilistic inference, is it useful to do so? Our work applies the principles of probabilistic inference to adaptive control, state estimation, fault-tolerant control, and planning under uncertainty. Furthermore, we connect our study to cutting-edge neuroscience research, particularly the free-energy principle and active inference.