Saliency guided data augmentation strategy for maximally utilizing an object's visual information.


Bibliographic Details
Main Authors: Junhyeok An, Soojin Jang, Junehyoung Kwon, Kyohoon Jin, YoungBin Kim
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2022-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0274767
Description: Among the various data augmentation strategies, mixup-based approaches have received particular attention. However, existing mixup-based approaches can suffer from object loss and label mismatching when random patches are used to construct augmented images; moreover, patches that contain no object at all may be included, which degrades performance. In this paper, we propose a novel augmentation method that mixes patches in a non-overlapping manner after extracting them from the salient regions of an image. The proposed method makes effective use of object characteristics, because the constructed image consists only of visually important regions, and it is robust to noise. Since the patches do not occlude each other, the semantically meaningful information in the salient regions can be fully exploited. Our method is also more robust to adversarial attacks than conventional augmentation methods. In our experiments, a Wide ResNet trained on the public datasets CIFAR-10, CIFAR-100, and STL-10 achieved top-1 accuracies of 97.26%, 83.99%, and 82.40%, respectively, surpassing other augmentation methods.
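The description outlines the core idea: extract patches from the salient regions of several source images and tile them without overlap, so no patch occludes another and the label is mixed accordingly. The snippet below is only a rough, hypothetical sketch of that idea as stated in the abstract, not the authors' implementation: it substitutes a simple gradient-magnitude proxy for a learned saliency detector, and the 2×2 grid, equal-area label mixing, and all function names are illustrative assumptions.

```python
import numpy as np

def saliency_map(img):
    """Gradient-magnitude proxy for saliency (the paper uses a real saliency detector)."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)          # np.gradient returns per-axis derivatives
    return np.hypot(gx, gy)

def most_salient_patch(img, ph, pw):
    """Return the (ph x pw) patch whose summed saliency is largest."""
    sal = saliency_map(img)
    ii = sal.cumsum(0).cumsum(1)        # integral image for O(1) window sums
    best, by, bx = -1.0, 0, 0
    H, W = sal.shape
    for y in range(H - ph + 1):
        for x in range(W - pw + 1):
            s = ii[y + ph - 1, x + pw - 1]
            if y > 0: s -= ii[y - 1, x + pw - 1]
            if x > 0: s -= ii[y + ph - 1, x - 1]
            if y > 0 and x > 0: s += ii[y - 1, x - 1]
            if s > best:
                best, by, bx = s, y, x
    return img[by:by + ph, bx:bx + pw]

def saliency_mix(images, labels, num_classes):
    """Tile one salient patch per source image into a 2x2 grid (non-overlapping)
    and mix the one-hot labels in proportion to patch area (equal areas here)."""
    H, W, C = images[0].shape
    ph, pw = H // 2, W // 2
    out = np.zeros((H, W, C), dtype=images[0].dtype)
    slots = [(0, 0), (0, pw), (ph, 0), (ph, pw)]
    label = np.zeros(num_classes)
    for (y, x), img, lab in zip(slots, images, labels):
        out[y:y + ph, x:x + pw] = most_salient_patch(img, ph, pw)
        label[lab] += 1.0 / len(slots)
    return out, label

# Toy usage on random 32x32 "images" (CIFAR-sized), four sources, ten classes.
rng = np.random.default_rng(0)
imgs = [rng.random((32, 32, 3)) for _ in range(4)]
mixed, soft = saliency_mix(imgs, [0, 1, 2, 3], num_classes=10)
```

Because each slot in the grid is disjoint, every pasted patch survives intact, which is the property the abstract emphasizes over random-patch mixup variants.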
ISSN: 1932-6203