Analysis of robust neural networks for control


Bibliographic Details
Main Author: Newton, M
Other Authors: Papachristodoulou, A
Format: Thesis
Language: English
Published: 2023
Description
Summary: <p>Neural networks are becoming prevalent in an expanding range of application areas, with the potential to provide huge benefits across numerous sectors. However, one of the greatest shortcomings of a trained neural network is its sensitivity to adversarial attacks. It is becoming clear that providing robust guarantees on systems that use neural networks is very important, especially in safety-critical applications. However, quantifying their safety and robustness properties has proven challenging because of the non-linear activation functions inside the neural network.</p> <p>This thesis addresses this problem from several perspectives. Firstly, we investigate the sparsity that arises in a recently proposed semidefinite programming framework for verifying fully connected feed-forward neural networks. We reformulate the optimisation problem to exploit this sparsity, demonstrating a significant speed-up in computation. In addition, we approach the problem through polynomial optimisation and show that, by using the Positivstellensatz, bounds on the robustness guarantees can be tightened significantly compared with other popular methods. We then reformulate this approach to exploit the sparsity in the problem whilst improving the accuracy.</p> <p>Neural networks have also seen increased recent use in feedback control systems. This is primarily because, as general function approximators, they have the potential to improve the performance of these systems over traditional controllers. However, since feedback systems are usually subject to external perturbations and neural networks are sensitive to small changes, providing robustness guarantees has proven challenging.</p> <p>In this thesis, we analyse non-linear systems that contain neural network controllers. We first address this problem by computing outer-approximations of the reachable sets using sparse polynomial optimisation.
We then use a Sum of Squares programming framework to certify the stability of these systems. Both approaches provide better robustness guarantees than existing methods. Finally, we extend these approaches to neural network controllers with rational activation functions. We also propose a method to recover a stabilising controller from a Sum of Squares program and apply it to a modified rational neural network controller.</p>
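The Sum of Squares stability certification mentioned in the summary can be sketched generically as follows. This is the standard SOS Lyapunov formulation, not the thesis's exact program; the constants and notation here are illustrative assumptions.

```latex
% Generic SOS Lyapunov conditions (a sketch, not the thesis's exact program).
% For closed-loop polynomial dynamics \dot{x} = f(x), with the neural network
% controller absorbed into f, find a polynomial V with V(0) = 0 such that
\begin{align*}
  V(x) - \epsilon \lVert x \rVert^{2} &\in \Sigma[x], \\
  -\nabla V(x)^{\top} f(x) - \epsilon \lVert x \rVert^{2} &\in \Sigma[x],
\end{align*}
% for some \epsilon > 0, where \Sigma[x] is the cone of sum-of-squares
% polynomials. Feasibility of this program, checkable via semidefinite
% programming, certifies asymptotic stability of the origin.
```

Because membership in $\Sigma[x]$ reduces to a semidefinite constraint on a Gram matrix, both conditions can be checked with an off-the-shelf SDP solver.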
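To make the idea of outer-approximating a network's reachable outputs concrete, here is a minimal pure-Python sketch using interval arithmetic. This is a far looser baseline than the semidefinite and polynomial-optimisation certificates developed in the thesis; the toy weights and input box below are arbitrary assumptions for illustration only.

```python
# Interval bound propagation: a crude outer-approximation of a ReLU
# network's output set, given an axis-aligned box of inputs.
# (Illustrative baseline only; not the thesis's SDP/SOS method.)

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo_acc = hi_acc = bias
        for w, l, h in zip(row, lo, hi):
            if w >= 0:
                lo_acc += w * l
                hi_acc += w * h
            else:  # negative weight swaps which endpoint is extremal
                lo_acc += w * h
                hi_acc += w * l
        out_lo.append(lo_acc)
        out_hi.append(hi_acc)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """Propagate a box through the element-wise ReLU."""
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]

def reachable_box(layers, lo, hi):
    """Outer-approximate the image of the input box under the network.

    `layers` is a list of (W, b) pairs; ReLU is applied between layers
    but not after the final layer.
    """
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = interval_relu(lo, hi)
    return lo, hi

# Toy 2-2-1 network with hand-picked weights (an assumption, not from the thesis).
layers = [
    ([[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]),
    ([[1.0, 1.0]], [0.5]),
]
lo, hi = reachable_box(layers, [-0.1, -0.1], [0.1, 0.1])
# Every output the network can produce on this input box lies in [lo[0], hi[0]].
```

Every true output is guaranteed to lie inside the returned box, but the box can be very conservative; tightening such bounds, for example via the Positivstellensatz, is precisely what the methods summarised above target.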