Multiexpert Adversarial Regularization for Robust and Data-Efficient Deep Supervised Learning

Deep neural networks (DNNs) can achieve high accuracy when there is abundant training data that has the same distribution as the test data. In practical applications, data deficiency is often a concern. For classification tasks, the lack of enough labeled images in the training set often results in overfitting. Another issue is the mismatch between the training and the test domains, which results in poor model performance. This calls for robust and data-efficient deep learning models. In this work, we propose a deep learning approach called Multi-Expert Adversarial Regularization learning (MEAR) with limited computational overhead to improve the generalization and robustness of deep supervised learning models. The MEAR framework appends multiple classifier heads (experts) to the feature extractor of the legacy model. MEAR aims to learn the feature extractor in an adversarial fashion by leveraging complementary information from the individual experts as well as the ensemble of the experts to be more robust for an unseen test domain. We train state-of-the-art networks with MEAR for two important computer vision tasks, image classification and semantic segmentation. We compare MEAR to a variety of baselines on multiple benchmarks. We show that MEAR is competitive with other methods and more successful at learning robust features.
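The abstract only sketches the architecture, so a minimal PyTorch-style illustration may help: a shared feature extractor with several expert heads appended, trained with an alternating update in which the heads are encouraged to disagree while the extractor is updated so that every head, and their ensemble, stays accurate. The toy backbone, the number of experts, and the disagreement regularizer below are assumptions for illustration only; the record does not give the paper's exact MEAR objective.

```python
# Minimal sketch (not the authors' released code): a shared feature extractor
# with multiple classifier heads ("experts"), trained with an alternating,
# adversarial-style update. Backbone, head count, and the disagreement term
# are placeholder assumptions, not the MEAR objective from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiExpertNet(nn.Module):
    def __init__(self, feature_dim=128, num_classes=10, num_experts=3):
        super().__init__()
        # Shared feature extractor (stand-in for the legacy model's backbone).
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim), nn.ReLU(),
        )
        # Multiple classifier heads (experts) on top of the shared features.
        self.experts = nn.ModuleList(
            [nn.Linear(feature_dim, num_classes) for _ in range(num_experts)]
        )

    def forward(self, x):
        feats = self.extractor(x)
        logits = [head(feats) for head in self.experts]  # per-expert predictions
        ensemble = torch.stack(logits).mean(dim=0)       # ensemble of the experts
        return logits, ensemble


def training_step(model, x, y, opt_extractor, opt_experts, lam=0.1):
    """One alternating update: the experts are rewarded for disagreeing on the
    shared features, then the extractor is updated so that every expert and
    their ensemble remain accurate despite that pressure."""
    # Step 1: update only the experts (supervised loss minus a disagreement bonus).
    logits, _ = model(x)
    sup = sum(F.cross_entropy(l, y) for l in logits) / len(logits)
    probs = [F.softmax(l, dim=1) for l in logits]
    disagree = sum((p - q).abs().mean() for p in probs for q in probs) / len(probs) ** 2
    opt_experts.zero_grad()
    (sup - lam * disagree).backward()
    opt_experts.step()

    # Step 2: update only the extractor so all experts and the ensemble fit the labels.
    logits, ensemble = model(x)
    sup = sum(F.cross_entropy(l, y) for l in logits) / len(logits)
    loss = sup + F.cross_entropy(ensemble, y)
    opt_extractor.zero_grad()
    loss.backward()
    opt_extractor.step()
    return loss.item()


if __name__ == "__main__":
    model = MultiExpertNet()
    opt_f = torch.optim.SGD(model.extractor.parameters(), lr=0.01)
    opt_h = torch.optim.SGD(model.experts.parameters(), lr=0.01)
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    print(training_step(model, x, y, opt_f, opt_h))
```

Only the classifier heads are duplicated in this sketch, so the added parameters and compute stay small relative to the backbone, which is consistent with the abstract's claim of limited computational overhead.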

Bibliographic Details
Main Authors: Behnam Gholami, Qingfeng Liu, Mostafa El-Khamy, Jungwon Lee
Format: Article
Language: English
Published: IEEE 2022-01-01
Series: IEEE Access
Subjects: Image classification, image segmentation, data efficient learning, robust learning, ensemble methods, adversarial learning
Online Access: https://ieeexplore.ieee.org/document/9853519/
DOI: 10.1109/ACCESS.2022.3196780
ISSN: 2169-3536
Volume/Pages: IEEE Access, vol. 10, pp. 85080-85094 (2022)
Author Affiliation (all authors): SOC Cellular and Multimedia Research and Development Laboratory, Samsung Semiconductor Inc., San Diego, CA, USA
ORCID: Behnam Gholami https://orcid.org/0000-0003-0338-1921; Mostafa El-Khamy https://orcid.org/0000-0001-9421-6037