Using High-Performance Computing to Scale Generative Adversarial Networks


Bibliographic Details
Main Author: Flores, Diana J.
Other Authors: Hemberg, Erik
Format: Thesis
Published: Massachusetts Institute of Technology, 2022
Online Access: https://hdl.handle.net/1721.1/139311
Description
Summary: Generative adversarial networks (GANs) can be used for data augmentation, which helps in creating better detection models for rare or imbalanced datasets. However, GANs can be difficult to train due to issues such as mode collapse. We aim to improve the performance and accuracy of the Lipizzaner GAN framework by taking advantage of its distributed nature and running it at very large scales. Lipizzaner was implemented for robustness, but had not been tested at scale on high-performance computing (HPC) systems. We believe that by utilizing HPC technologies, we can scale up Lipizzaner and observe performance enhancements. This thesis achieves that scale-up using Oak Ridge National Laboratory's Summit supercomputer. We observed improvements in the performance of Lipizzaner, especially when run with weaker network architectures, which suggests that Lipizzaner is able to overcome network limitations through scale.