A minimalist approach to deep multi-task learning


Bibliographic Details
Main Author: Kurin, V
Other Authors: Whiteson, S
Format: Thesis
Language:English
Published: 2022
Subjects: Neural networks (Computer science); Reinforcement learning; Deep learning (Machine learning)
_version_ 1826316007112704000
author Kurin, V
author2 Whiteson, S
author_facet Whiteson, S
Kurin, V
author_sort Kurin, V
collection OXFORD
description <p>Multi-task learning is critical for real-life applications of machine learning, yet modern approaches are characterised by often-unjustified algorithmic complexity that leads to impractical solutions. In contrast, this thesis demonstrates that a minimalistic alternative is possible, showing the attractiveness of simple methods. '<i>In Defence of the Unitary Scalarisation for Deep Multi-task Learning</i>' motivates the rest of the thesis, showing that none of the more complex multi-task optimisers outperforms simple per-task gradient summation when compared on fair grounds. Furthermore, it offers a novel view of multi-task optimisers from the regularisation standpoint. The rest of the thesis focuses on deep reinforcement learning, a general framework for sequential decision-making. In particular, we study the setting in which observations (the inputs to the model) are represented as graphs, i.e., collections of interconnected nodes. In '<i>Scaling GNNs to High-Dimensional Continuous Control</i>' and '<i>The Role of Morphology in Graph-Based Incompatible Control</i>', we learn a single control policy for agents of different morphologies by representing the elements of the observation set as graphs and deploying graph neural networks (including transformers). In the former chapter, we devise a simple method for scaling graph networks: freezing parts of the network to stabilise learning and prevent overfitting. In the latter chapter, we show that graph connectivity may be suboptimal for the downstream task, demonstrating that less-constrained transformers perform significantly better without access to the graph connectivity information. Finally, in '<i>Generalisable Branching Heuristic for a SAT Solver</i>', we apply multi-task reinforcement learning to Boolean satisfiability, a problem fundamental to both academia and industry. We demonstrate that Q-learning, a staple reinforcement learning algorithm, equipped with graph neural networks for function approximation, can learn a generalisable branching heuristic.</p> <p>We hope our findings will steer the further development of the field: creating more complex benchmarks, adding assumptions on task similarity and model capacity, and exploring objective functions other than average performance across tasks.</p>
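The unitary scalarisation defended in the first chapter amounts to optimising the plain sum of the per-task losses with one ordinary optimiser. A minimal sketch of the idea, assuming a toy shared linear model and two illustrative regression tasks that are not from the thesis's experiments:

```python
import numpy as np

# Unitary scalarisation: train the shared parameters on the plain sum of
# the per-task losses with a single standard optimiser, instead of a
# specialised multi-task optimiser. Model, data, and learning rate here
# are illustrative toys, not the thesis's setup.
rng = np.random.default_rng(0)
w = rng.normal(size=3)                           # shared parameters
X = rng.normal(size=(8, 3))                      # shared inputs
y1, y2 = rng.normal(size=8), rng.normal(size=8)  # per-task targets

def task_grad(w, X, y):
    """Gradient of the per-task loss 0.5 * mean((Xw - y)^2) w.r.t. w."""
    return X.T @ (X @ w - y) / len(y)

# The gradient of the summed objective is exactly the sum of the per-task
# gradients: no gradient surgery, projection, or per-task re-weighting.
g = task_grad(w, X, y1) + task_grad(w, X, y2)

# One plain gradient-descent step on the unitary scalarisation.
lr = 0.05
w_new = w - lr * g
```

The point of the sketch is that the summed objective requires nothing beyond ordinary gradient descent; the per-task gradients are simply added before the parameter update.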
first_indexed 2024-03-07T07:45:04Z
format Thesis
id oxford-uuid:291e12e6-6cb0-4244-8a6f-d350cef9b20f
institution University of Oxford
language English
last_indexed 2024-12-09T03:36:45Z
publishDate 2022
record_format dspace
spelling oxford-uuid:291e12e6-6cb0-4244-8a6f-d350cef9b20f2024-12-01T20:02:42ZA minimalist approach to deep multi-task learningThesishttp://purl.org/coar/resource_type/c_db06uuid:291e12e6-6cb0-4244-8a6f-d350cef9b20fNeural networks (Computer science)Reinforcement learningDeep learning (Machine learning)EnglishHyrax Deposit2022Kurin, VWhiteson, S
spellingShingle Neural networks (Computer science)
Reinforcement learning
Deep learning (Machine learning)
Kurin, V
A minimalist approach to deep multi-task learning
title A minimalist approach to deep multi-task learning
title_full A minimalist approach to deep multi-task learning
title_fullStr A minimalist approach to deep multi-task learning
title_full_unstemmed A minimalist approach to deep multi-task learning
title_short A minimalist approach to deep multi-task learning
title_sort minimalist approach to deep multi task learning
topic Neural networks (Computer science)
Reinforcement learning
Deep learning (Machine learning)
work_keys_str_mv AT kurinv aminimalistapproachtodeepmultitasklearning
AT kurinv minimalistapproachtodeepmultitasklearning