Summary: | <p>The field of machine learning has seen tremendous progress in the last decade, largely due to the advent of deep neural networks. When trained on large-scale labelled datasets, these machine learning algorithms can learn powerful semantic representations directly from the input data, end-to-end. End-to-end learning requires three core components: useful input data, target outputs, and an objective function for measuring how well the model's predictions match the target outputs. In this thesis, we explore and overcome a series of challenges related to assembling these three components at the scale, and in the format, required for end-to-end learning.</p>
<p>The first key idea presented in this thesis is to learn representations by enabling end-to-end learning for tasks where such challenges exist. We first explore whether better representations can be learnt for the image retrieval task by directly optimising the evaluation metric, Average Precision. This is a notoriously challenging task, because such rank-based metrics are non-differentiable. We introduce a simple objective function that optimises a smoothed approximation of Average Precision, termed Smooth-AP, and demonstrate the benefits of training end-to-end over prior approaches. Secondly, we explore whether a representation can be learnt end-to-end for the task of image editing, where target data does not exist at sufficient scale. We propose a self-supervised approach that simulates target data by augmenting off-the-shelf image data, yielding remarkable benefits over prior work.</p>
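<p>To illustrate the core idea behind smoothing a rank-based metric, the sketch below (an assumed, minimal formulation with hypothetical names; the thesis's exact loss may differ) replaces the non-differentiable ranking indicator 1[s_j &gt; s_i] with a temperature-scaled sigmoid, so that Average Precision for a single query becomes differentiable in the similarity scores:</p>

```python
import numpy as np

def smooth_ap(scores, labels, tau=0.01):
    """Smoothed Average Precision for one query (illustrative sketch).

    scores: similarity of each gallery item to the query.
    labels: 1 for positive (same identity/class), 0 for negative.
    tau:    temperature; smaller values approach true (hard) ranking.
    """
    # Pairwise score differences D[i, j] = s_j - s_i.
    D = scores[None, :] - scores[:, None]
    # Sigmoid relaxation of the indicator 1[s_j > s_i].
    sig = 1.0 / (1.0 + np.exp(-D / tau))
    np.fill_diagonal(sig, 0.0)  # an item is not ranked against itself
    pos = labels.astype(bool)
    # Smoothed rank of each item overall, and among positives only.
    rank_all = 1.0 + sig.sum(axis=1)
    rank_pos = 1.0 + sig[:, pos].sum(axis=1)
    # AP = mean over positives of (rank among positives / overall rank).
    return (rank_pos[pos] / rank_all[pos]).mean()
```

<p>With a small temperature this closely matches ordinary AP (a perfect ranking of positives scores near 1), while remaining differentiable so it can be used directly as a training objective.</p>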
<p>The second idea presented in this thesis focuses on how to use the rich multi-modal signals that are essential for human perceptual systems as input data for deep neural networks. More specifically, we explore the use of audio-visual input data for the human-centric video understanding task. Here, we first explore whether highly optimised speaker verification representations can transfer to the domain of movies, where humans intentionally disguise their voice. We do this by collecting an audio-visual dataset of humans speaking in movies. Secondly, given strong identity-discriminating representations, we present two methods that harness the complementarity and redundancy between multi-modal signals in order to build robust perceptual systems for determining who is present in a scene. These methods comprise an automated pipeline for labelling people in unlabelled video archives, and an approach for clustering people by identity in videos.</p>