GeneSIS-Rt: Generating Synthetic Images for Training Secondary Real-World Tasks

We propose a novel approach for generating high-quality, synthetic data for domain-specific learning tasks, for which training data may not be readily available. We leverage recent progress in image-to-image translation to bridge the gap between simulated and real images, allowing us to generate realistic training data for real-world tasks using only unlabeled real-world images and a simulation. GeneSIS-RT ameliorates the burden of having to collect labeled real-world images and is a promising candidate for generating high-quality, domain-specific, synthetic data. To show the effectiveness of using GeneSIS-RT to create training data, we study two tasks: semantic segmentation and reactive obstacle avoidance. We demonstrate that learning algorithms trained using data generated by GeneSIS-RT make high-accuracy predictions, outperform systems trained on raw simulated data alone, and perform as well as or better than those trained on real data. Finally, we use our data to train a quadcopter to fly 60 meters at speeds up to 3.4 m/s through a cluttered environment, demonstrating that our GeneSIS-RT images can be used to learn to perform mission-critical tasks.


Bibliographic Details
Main Authors: Stein, Gregory Joseph; Roy, Nicholas
Other Authors: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2020
Online Access: https://hdl.handle.net/1721.1/125861
Additional Affiliations: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Funding: Defense Advanced Research Projects Agency (DARPA) (Contract HR0011-15-C-0110)
Citation: Stein, Gregory J. and Nicholas Roy. "GeneSIS-Rt: Generating Synthetic Images for Training Secondary Real-World Tasks." IEEE International Conference on Robotics and Automation (ICRA), May 2018, Brisbane, QLD, Australia. Institute of Electrical and Electronics Engineers, September 2018. © 2018 IEEE
DOI: http://dx.doi.org/10.1109/icra.2018.8462971
ISBN: 9781538630815
ISSN: 2577-087X
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)