Towards efficient and reliable neural networks

Deep neural networks have achieved remarkable results across a wide range of applications, but deploying them on autonomous agents in the real world raises several challenges. Computational resources are limited, and navigating the real world requires low latency. Moreover, for many safety-critical applications such as autonomous driving, deployed models must be reliable. In this thesis, we tackle these challenges with four main contributions.

The first contribution addresses the constraints on computational resources via pruning at initialization. Unlike traditional methods that prune fully trained networks, reducing the number of parameters before training improves memory and computational efficiency at both training and inference time. The proposed pruning mechanism can find trainable sub-networks under much stricter sparsity constraints.

The second contribution focuses on the reliability concerns raised by the existence of adversarial examples. We analyse single-step adversarial training methods and propose a simple modification to the Fast Gradient Sign Method that allows us to train robust neural networks more efficiently.

Shifting the focus to reliability under natural domain shifts, the third contribution is a comprehensive study evaluating a large number of semantic segmentation models. We find that while recent models enjoy increased robustness to distribution shifts, their uncertainty estimation does not improve accordingly. We then identify methods from the literature that improve model reliability out of domain.

The fourth contribution focuses on model reliability in the presence of unknown objects. Leveraging recent advances in text-conditioned image generation, we present a pipeline that automatically and realistically inserts objects into images. We show that the resulting images can be used to test and fine-tune anomaly segmentation methods, as well as to extend existing datasets so that new classes can be learned.
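As an illustration of the first contribution, the sketch below shows the general recipe of pruning at initialization in PyTorch: score every weight from a single mini-batch before training, keep the highest-scoring connections, and train only the resulting sub-network. The SNIP-style saliency score |gradient × weight| and the helper prune_at_init are illustrative assumptions; the abstract does not specify the thesis's actual pruning criterion.

```python
# Minimal sketch of pruning at initialization (assumption: a SNIP-style
# saliency score |gradient * weight|; the thesis's own criterion is not
# reproduced here).
import torch
import torch.nn as nn
import torch.nn.functional as F

def prune_at_init(model: nn.Module, batch, sparsity: float):
    """Zero out the lowest-saliency fraction of weights before any training."""
    inputs, targets = batch
    loss = F.cross_entropy(model(inputs), targets)
    weights = [p for p in model.parameters() if p.dim() > 1]   # conv/linear kernels
    grads = torch.autograd.grad(loss, weights)

    # Saliency of each connection: |gradient * weight|.
    scores = [(g * w).abs() for g, w in zip(grads, weights)]

    # Global threshold so that a `sparsity` fraction of weights is removed.
    flat = torch.cat([s.flatten() for s in scores])
    k = max(1, int(sparsity * flat.numel()))
    threshold = flat.kthvalue(k).values

    masks = [(s > threshold).float() for s in scores]
    with torch.no_grad():
        for w, m in zip(weights, masks):
            w.mul_(m)          # remove pruned connections
    return masks               # reapply these masks after every optimizer step
```

Training then proceeds on the sparse network, with the masks kept fixed so that pruned connections never reappear.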

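The second contribution builds on single-step adversarial training. The abstract does not detail the proposed modification, so the sketch below only shows the standard FGSM adversarial-training loop it starts from; model, loader, optimizer, and epsilon are assumed to be defined elsewhere.

```python
# Minimal sketch of single-step (FGSM) adversarial training in PyTorch.
# The thesis's specific modification to FGSM is not reproduced here.
import torch
import torch.nn.functional as F

def fgsm_train_epoch(model, loader, optimizer, epsilon=8 / 255):
    model.train()
    for x, y in loader:
        # Craft a one-step adversarial example using the sign of the input gradient.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

        # Ordinary training step on the perturbed input.
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```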

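For the fourth contribution, a text-conditioned inpainting model can place an object described by a prompt into a masked region of an existing image. The sketch below uses the Hugging Face diffusers inpainting pipeline with a placeholder checkpoint, file names, and prompt; the thesis's full pipeline (object placement, blending, and label generation) is not reproduced.

```python
# Minimal sketch: inserting an object into a scene with text-conditioned
# inpainting (checkpoint, file names, and prompt are placeholders).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

scene = Image.open("road_scene.png").convert("RGB")        # image to edit
mask = Image.open("placement_mask.png").convert("RGB")     # white where the object goes

result = pipe(
    prompt="a lost cardboard box lying on the road",
    image=scene,
    mask_image=mask,
).images[0]
result.save("road_scene_with_anomaly.png")
```

The masked region then doubles as a pixel-accurate label, so the edited image can be used to test or fine-tune anomaly segmentation models.
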
Bibliographic Details
Main Author: de Jorge Aranda, P
Other Authors: Torr, P; Dokania, P
Format: Thesis
Language: English
Published: 2024
Subjects: Deep learning (Machine learning); Reliability