Probabilistic reach-avoid for Bayesian neural networks
Model-based reinforcement learning seeks to simultaneously learn the dynamics of an unknown stochastic environment and synthesise an optimal policy for acting in it. Ensuring the safety and robustness of sequential decisions made through a policy in such an environment is a key challenge for policie...
Main Authors: Wicker, M, Laurenti, L, Patane, A, Paoletti, N, Abate, A, Kwiatkowska, M
Format: Journal article
Language: English
Published: Elsevier, 2024
Similar Items
- Probabilistic safety for Bayesian neural networks
  by: Wicker, M, et al.
  Published: (2020)
- Certification of iterative predictions in Bayesian neural networks
  by: Wicker, M, et al.
  Published: (2021)
- Statistical guarantees for the robustness of Bayesian neural networks
  by: Cardelli, L, et al.
  Published: (2019)
- Adversarial robustness certification for Bayesian neural networks
  by: Wicker, M, et al.
  Published: (2024)
- Robustness of Bayesian neural networks to gradient-based attacks
  by: Carbone, G, et al.
  Published: (2020)