Breaking things so you don’t have to: risk assessment and failure prediction for cyber-physical AI

Before autonomous systems can be deployed in safety-critical environments, we must be able to verify that they will perform safely, ideally without the risk and expense of real-world testing. A wide variety of formal methods and simulation-driven techniques have been developed to solve this verification problem, but they typically rely on difficult-to-construct mathematical models or else use sample-inefficient black-box optimization methods.


Bibliographic Details
Main Author: Dawson, Charles Burke
Other Authors: Fan, Chuchu
Format: Thesis
Published: Massachusetts Institute of Technology 2024
Online Access: https://hdl.handle.net/1721.1/155397
ORCID: https://orcid.org/0000-0002-8371-5313
Description: Before autonomous systems can be deployed in safety-critical environments, we must be able to verify that they will perform safely, ideally without the risk and expense of real-world testing. A wide variety of formal methods and simulation-driven techniques have been developed to solve this verification problem, but they typically rely on difficult-to-construct mathematical models or else use sample-inefficient black-box optimization methods. Moreover, existing verification methods provide little guidance on how to optimize the system's design to be more robust to the failures they uncover. In this thesis, I develop a suite of methods that accelerate verification and design automation of robots and other autonomous systems by using program analysis tools such as automatic differentiation and probabilistic programming to automatically construct mathematical models of the system under test. In particular, I make the following contributions. First, I use automatic differentiation to develop a flexible, general-purpose framework for end-to-end design automation and statistical safety verification for autonomous systems. Second, I improve the sample efficiency of end-to-end optimization using adversarial optimization to falsify differentiable formal specifications of desired robot behavior. Third, I provide a novel reformulation of the design and verification problem using Bayesian inference to predict a more diverse set of challenging adversarial failure modes. Finally, I present a data-driven method for root-cause failure diagnosis, allowing system designers to infer what factors may have contributed to failure based on noisy data from real-world deployments. I apply the methods developed in this thesis to a range of challenging problems in robotics and cyber-physical systems.
I demonstrate the use of this design and verification framework to optimize spacecraft trajectory and control systems, multi-agent formation and communication strategies, vision-in-the-loop controllers for autonomous vehicles, and robust generation dispatch for electrical power systems, and I apply this failure diagnosis tool on real-world data from scheduling failures in a nationwide air transportation network.
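The adversarial-falsification idea summarized in the abstract can be sketched in miniature. The toy below is not code from the thesis: the simulator, the safety specification, and the disturbance set are all invented for illustration. A tiny forward-mode autodiff class differentiates a hypothetical closed-loop simulator end to end, a log-sum-exp smoothed maximum turns the spec "x never exceeds 1.5" into a differentiable robustness value, and gradient descent on that robustness searches the disturbance set for a falsifying input.

```python
import math

class Dual:
    """Minimal forward-mode autodiff value: carries f(x) and df/dx."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _coerce(self, o):
        return o if isinstance(o, Dual) else Dual(float(o))
    def __add__(self, o):
        o = self._coerce(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._coerce(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __rsub__(self, o):
        return self._coerce(o).__sub__(self)
    def __mul__(self, o):
        o = self._coerce(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def smooth_max(xs, k=10.0):
    # Differentiable (log-sum-exp) maximum, the standard smoothing used to
    # make max/min-based robustness values differentiable; unlike a hard max
    # it passes gradient to every timestep, so the optimizer never stalls.
    m = max(x.val for x in xs)                    # shift for numerical stability
    ws = [math.exp(k * (x.val - m)) for x in xs]
    s = sum(ws)
    return Dual(m + math.log(s) / k,
                sum(w * x.dot for w, x in zip(ws, xs)) / s)

def robustness(wind):
    # Hypothetical differentiable "simulator": a point robot starting at
    # x = 1 is regulated toward x = 0 by a PD controller while a constant
    # wind disturbance pushes it away. Spec: x(t) must never exceed 1.5, so
    # rho = 1.5 - (soft) peak excursion; rho < 0 means the spec is violated.
    x, v, dt = Dual(1.0), Dual(0.0), 0.1
    xs = [x]
    for _ in range(50):
        a = -2.0 * x - 1.0 * v + wind             # controller + disturbance
        v = v + dt * a
        x = x + dt * v
        xs.append(x)
    return 1.5 - smooth_max(xs)

def falsify(steps=300, lr=1.0, bound=3.0):
    # Adversarial optimization: gradient descent on robustness with respect
    # to the disturbance, projected onto the assumed disturbance set
    # [-bound, bound], seeking the wind that breaks the safety spec.
    wind = 0.0
    for _ in range(steps):
        rho = robustness(Dual(wind, 1.0))         # seed d(rho)/d(wind) = 1
        wind = max(-bound, min(bound, wind - lr * rho.dot))
    return wind
```

In this toy setting the optimizer pushes the wind to the boundary of its assumed set, where the robustness goes negative, i.e. the spec is falsified; a black-box sampler would need many simulations to find the same counterexample, which is the sample-efficiency argument the abstract makes for differentiating through the system under test.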
Department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Degree: Ph.D.
Date Issued: 2024-05
License: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0); copyright retained by author(s). https://creativecommons.org/licenses/by-nc-nd/4.0/