Abstract: <p>An estimated 400,000 children are born every year with rare genetic disorders that significantly affect their quality of life. Early detection and intervention can markedly improve outcomes for these children. Craniofacial characteristics provide clinical geneticists with highly useful diagnostic information. This thesis investigates the use of computer vision to aid in the automatic detection of genetic disorders from ordinary facial photographs. This is a non-trivial task, in part due to patient privacy concerns and the scarcity of training data. In the following, we present several approaches to overcome these challenges.</p>
<p>First, we present a method for creating realistic-looking average faces for individuals sharing a syndrome. These averages remove identifiable features but retain clinically relevant phenotype information and preserve facial asymmetry. The procedure is completely automated, so patient identities need not be exposed at any point, and it could help facilitate facial diagnosis in clinical settings. We also investigate creating transitions between averages and exaggerated caricature faces to highlight phenotype differences between patient groups.</p>
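To make the averaging idea concrete, here is a minimal sketch of its core step, assuming faces have already been warped to a common landmark frame. This is an illustrative simplification, not the thesis's full automated pipeline (which handles alignment, appearance, and asymmetry); the function name is ours.

```python
import numpy as np

def average_face(aligned_faces):
    """Pixel-wise mean of landmark-aligned face images.

    aligned_faces: array-like of shape (n_faces, H, W) or (n_faces, H, W, C),
    all already warped to a common landmark frame (alignment is assumed done
    upstream). Averaging after alignment blurs identity-specific detail while
    retaining features shared across the group, such as a syndrome phenotype.
    """
    faces = np.asarray(aligned_faces, dtype=np.float64)
    return faces.mean(axis=0)
```

Because identity-specific variation cancels in the mean while group-level structure does not, no individual patient's face needs to be shown at any point.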
<p>Second, we investigate the classification of eight genetic disorders with shallow and deep representations. We compare shape and appearance descriptors based on local and dense features and report significant improvements over previous work. Furthermore, we use transfer learning and part-based models to train convolutional neural networks for syndrome classification. Our results show that deep learning can be applied to the classification of genetic disorders and outperforms shallow descriptors, despite the small training datasets.</p>
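The part-based idea can be sketched as follows: per-part features (e.g. from eye, nose, and mouth patches) are concatenated into one face descriptor, which is then classified. This is a deliberately shallow stand-in for the models compared in the thesis, with a nearest-class-mean rule in place of the trained classifiers; all names here are our own.

```python
import numpy as np

def part_based_descriptor(parts):
    """Concatenate per-part feature vectors into one face descriptor.

    Each element of `parts` is a feature vector extracted from one facial
    part; the per-part extractors themselves are assumed given.
    """
    return np.concatenate([np.ravel(p) for p in parts])

def nearest_mean_classify(descriptor, class_means):
    """Assign the face to the syndrome whose mean descriptor is closest
    in Euclidean distance (a simple illustrative decision rule).
    """
    names = list(class_means)
    dists = [np.linalg.norm(descriptor - class_means[n]) for n in names]
    return names[int(np.argmin(dists))]
```

Restricting each sub-model to a facial part reduces the input dimensionality per model, which is one way to cope with small training sets.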
<p>Neural networks are prone to learning biases present in training datasets and basing their decisions on them. This is particularly relevant when training on small datasets, as is the case in the domain of genetic disorders. We introduce a bias removal algorithm that aims to overcome this challenge, and report three distinct contributions: first, ensuring that a network is blind to a known bias in the dataset; second, improving classification performance when faced with an extreme bias; and third, removing multiple spurious variations from the feature representation of a primary classification task.</p>
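One common mechanism for such blindness objectives, used here purely as an illustrative sketch and not necessarily the thesis's exact formulation, is a confusion loss: a secondary classifier tries to predict the bias variable from the learned features, and the feature extractor is penalised unless that classifier's output is uniform, i.e. the features carry no information about the bias. A minimal numpy sketch of the loss term (the function name is ours):

```python
import numpy as np

def confusion_loss(bias_logits):
    """Cross-entropy between the bias classifier's predicted distribution
    and the uniform distribution over the k bias classes.

    bias_logits: array of shape (batch, k). The loss is minimised (at log k)
    when the predictions are uniform, so minimising it w.r.t. the feature
    extractor drives the features to be uninformative about the bias.
    """
    # stable softmax over the bias classes
    p = np.exp(bias_logits - bias_logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    k = p.shape[1]
    return -np.mean(np.sum(np.log(p + 1e-12) / k, axis=1))
```

In a full training loop this term would be minimised by the feature extractor while the bias classifier itself is trained on the ordinary bias labels, giving the adversarial "blinding" effect.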
<p>Lastly, we introduce a novel image augmentation method for learning a deep face embedding, the “Interpolated Clinical Face Phenotype Space”, that aims to describe clinically relevant face variation. Our contributions are two-fold: 1) Interpolations between faces that share a class improve deep representation training from small datasets. 2) Between-class interpolations that model the space between classes improve the generalisation performance of the deep representation to unseen syndromes.</p>
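As an illustration of the interpolation idea, the following sketch blends two samples linearly; the helper name and the plain linear blend on raw arrays are our assumptions, whereas the thesis interpolates within a clinically meaningful face phenotype space.

```python
import numpy as np

def interpolate_faces(x_a, x_b, rng=None):
    """Synthesise a training sample on the line between two face samples.

    Within-class use: x_a and x_b share a syndrome label, so the interpolant
    keeps that label and enlarges the effective training set.
    Between-class use: x_a and x_b come from different classes, so the
    interpolant populates the space between classes, which can help the
    embedding generalise to unseen syndromes.
    """
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.uniform(0.0, 1.0)  # random mixing coefficient in [0, 1)
    return alpha * np.asarray(x_a) + (1.0 - alpha) * np.asarray(x_b)
```

Each call yields a new point on the segment between the two inputs, so a small dataset can supply an effectively unbounded stream of augmented samples.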