GeneSIS-Rt: Generating Synthetic Images for Training Secondary Real-World Tasks

Bibliographic Details
Main Authors: Stein, Gregory Joseph, Roy, Nicholas
Other Authors: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2020
Online Access: https://hdl.handle.net/1721.1/125861
Description
Summary: We propose a novel approach for generating high-quality, synthetic data for domain-specific learning tasks for which training data may not be readily available. We leverage recent progress in image-to-image translation to bridge the gap between simulated and real images, allowing us to generate realistic training data for real-world tasks using only unlabeled real-world images and a simulation. GeneSIS-RT ameliorates the burden of collecting labeled real-world images and is a promising candidate for generating high-quality, domain-specific, synthetic data. To show the effectiveness of using GeneSIS-RT to create training data, we study two tasks: semantic segmentation and reactive obstacle avoidance. We demonstrate that learning algorithms trained on data generated by GeneSIS-RT make high-accuracy predictions, outperform systems trained on raw simulated data alone, and perform as well as or better than those trained on real data. Finally, we use our data to train a quadcopter to fly 60 meters at speeds up to 3.4 m/s through a cluttered environment, demonstrating that our GeneSIS-RT images can be used to learn to perform mission-critical tasks.
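
For intuition, the following is a minimal, hypothetical sketch of the kind of pipeline the summary describes: simulated images are passed through a learned sim-to-real translator, and the translated images, paired with the simulator's ground-truth labels, are used to train a downstream task network (here, semantic segmentation). Everything in this sketch is an illustrative assumption rather than the authors' implementation; in particular, `translator`, `TinySegNet`, and `make_training_batch` are placeholder names, and an identity module stands in for a trained image-to-image translation model.

```python
# Hypothetical sketch of a GeneSIS-RT-style training pipeline (not the
# authors' code). Assumes a pretrained sim-to-real translator is available.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy per-pixel classifier standing in for a real segmentation model."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)

def make_training_batch(sim_images, sim_labels, translator):
    """Translate simulated images into the real domain; the simulator's
    labels carry over because translation preserves scene geometry."""
    with torch.no_grad():
        realistic = translator(sim_images)
    return realistic, sim_labels

# --- illustrative usage with random stand-in data ---
translator = nn.Identity()   # placeholder for a trained sim-to-real model
model = TinySegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

sim_images = torch.rand(4, 3, 64, 64)           # simulator renders
sim_labels = torch.randint(0, 3, (4, 64, 64))   # simulator ground truth

images, labels = make_training_batch(sim_images, sim_labels, translator)
opt.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
opt.step()
```

The key design point the sketch illustrates is that no labeled real-world images are required: labels come for free from the simulator, and realism comes from the translator, which can be trained with only unlabeled real-world images.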