Improve conditional adversarial domain adaptation using self‐training
Abstract Domain adaptation for image classification is one of the most fundamental transfer learning tasks and a promising solution to overcome the annotation burden. Existing deep adversarial domain adaptation approaches employ minimax optimization algorithms, matching the global features across dom...
Main Authors: Zi Wang, Xiaoliang Sun, Ang Su, Gang Wang, Yang Li, Qifeng Yu
Format: Article
Language: English
Published: Wiley, 2021-08-01
Series: IET Image Processing
Subjects: Image recognition; Optimisation techniques; Computer vision and image processing techniques; Neural nets
Online Access: https://doi.org/10.1049/ipr2.12184
_version_ | 1798035533758201856 |
author | Zi Wang Xiaoliang Sun Ang Su Gang Wang Yang Li Qifeng Yu |
author_facet | Zi Wang Xiaoliang Sun Ang Su Gang Wang Yang Li Qifeng Yu |
author_sort | Zi Wang |
collection | DOAJ |
description | Abstract Domain adaptation for image classification is one of the most fundamental transfer learning tasks and a promising solution to overcome the annotation burden. Existing deep adversarial domain adaptation approaches employ minimax optimization algorithms, matching the global features across domains. However, the information conveyed in unlabelled target samples is not fully exploited. Here, adversarial learning and self-training are unified in a single objective function, in which the neural network parameters and the pseudo-labels of target samples are jointly optimized. The model's predictions on unlabelled samples are leveraged to pseudo-label target samples. The training procedure consists of two alternating steps: in the first, the network is trained; in the second, pseudo-labels are regenerated, and the loop continues. The proposed method achieves mean accuracy improvements of 2% on Office-31, 0.7% on ImageCLEF-DA, 1.8% on Office-Home, and 1.2% on Digits over the baseline, and is superior to most state-of-the-art approaches.
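The description above outlines an alternating two-step procedure: train the network with an adversarial domain-alignment objective plus a self-training term, then regenerate pseudo-labels for target samples from the model's own confident predictions. The record contains no code, so the following is only a minimal sketch of such a loop under assumed names; `feature_net`, `classifier`, `discriminator`, the data loaders, the confidence threshold, and the simple minimax formulation are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the alternating procedure described in the abstract.
# All component names and hyperparameters are illustrative assumptions.

def generate_pseudo_labels(feature_net, classifier, target_loader, threshold=0.9):
    """Step 2: pseudo-label unlabelled target samples from confident predictions."""
    feature_net.eval()
    classifier.eval()
    batches = []
    with torch.no_grad():
        for x_t, _ in target_loader:
            probs = F.softmax(classifier(feature_net(x_t)), dim=1)
            conf, labels = probs.max(dim=1)
            keep = conf > threshold                     # keep only confident predictions
            if keep.any():
                batches.append((x_t[keep], labels[keep]))
    return batches

def train_one_round(feature_net, classifier, discriminator,
                    source_loader, target_loader, pseudo_batches,
                    opt_fc, opt_d):
    """Step 1: update the network on source labels, target pseudo-labels,
    and a plain adversarial domain-alignment term (a simple minimax)."""
    feature_net.train()
    classifier.train()
    for (x_s, y_s), (x_t, _), (x_p, y_p) in zip(source_loader, target_loader, pseudo_batches):
        f_s, f_t = feature_net(x_s), feature_net(x_t)
        cls_loss = F.cross_entropy(classifier(f_s), y_s)               # supervised source loss
        st_loss = F.cross_entropy(classifier(feature_net(x_p)), y_p)   # self-training loss
        # Domain labels: 1 for source features, 0 for target features.
        dom = torch.cat([torch.ones(f_s.size(0)), torch.zeros(f_t.size(0))])
        adv_out = discriminator(torch.cat([f_s, f_t])).squeeze(1)
        adv_loss = F.binary_cross_entropy_with_logits(adv_out, dom)
        # Feature/classifier step: fool the discriminator (maximize adv_loss).
        # opt_fc is assumed to cover only feature_net and classifier parameters.
        loss = cls_loss + st_loss - adv_loss
        opt_fc.zero_grad()
        loss.backward()
        opt_fc.step()
        # Discriminator step of the minimax, on detached features.
        d_out = discriminator(torch.cat([f_s.detach(), f_t.detach()])).squeeze(1)
        d_loss = F.binary_cross_entropy_with_logits(d_out, dom)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()
```

An outer loop would simply alternate `train_one_round` and `generate_pseudo_labels` until convergence; how the pseudo-labels and network parameters are jointly optimized in one objective, and how the conditional adversarial term is formed, are the paper's contribution and are not reproduced here.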
first_indexed | 2024-04-11T20:59:31Z |
format | Article |
id | doaj.art-c6da647ee96e4f379bcfaba427d6874b |
institution | Directory Open Access Journal |
issn | 1751-9659 1751-9667 |
language | English |
last_indexed | 2024-04-11T20:59:31Z |
publishDate | 2021-08-01 |
publisher | Wiley |
record_format | Article |
series | IET Image Processing |
spelling | doaj.art-c6da647ee96e4f379bcfaba427d6874b | 2022-12-22T04:03:34Z | eng | Wiley | IET Image Processing | ISSN 1751-9659, 1751-9667 | 2021-08-01 | Vol. 15, Iss. 10, pp. 2169-2178 | 10.1049/ipr2.12184 | Improve conditional adversarial domain adaptation using self-training | Zi Wang [1], Xiaoliang Sun [1], Ang Su [1], Gang Wang [2], Yang Li [1], Qifeng Yu [1]; [1] College of Aerospace Science and Engineering, National University of Defense Technology, Changsha, People's Republic of China; [2] National Key Laboratory of Human Factor Engineering, China Astronaut Research and Training Centre, Beijing, People's Republic of China | Abstract: Domain adaptation for image classification is one of the most fundamental transfer learning tasks and a promising solution to overcome the annotation burden. Existing deep adversarial domain adaptation approaches employ minimax optimization algorithms, matching the global features across domains. However, the information conveyed in unlabelled target samples is not fully exploited. Here, adversarial learning and self-training are unified in a single objective function, in which the neural network parameters and the pseudo-labels of target samples are jointly optimized. The model's predictions on unlabelled samples are leveraged to pseudo-label target samples. The training procedure consists of two alternating steps: in the first, the network is trained; in the second, pseudo-labels are regenerated, and the loop continues. The proposed method achieves mean accuracy improvements of 2% on Office-31, 0.7% on ImageCLEF-DA, 1.8% on Office-Home, and 1.2% on Digits over the baseline, and is superior to most state-of-the-art approaches. | https://doi.org/10.1049/ipr2.12184 | Image recognition; Optimisation techniques; Computer vision and image processing techniques; Neural nets
spellingShingle | Zi Wang Xiaoliang Sun Ang Su Gang Wang Yang Li Qifeng Yu Improve conditional adversarial domain adaptation using self-training IET Image Processing Image recognition Optimisation techniques Computer vision and image processing techniques Neural nets |
title | Improve conditional adversarial domain adaptation using self‐training |
title_full | Improve conditional adversarial domain adaptation using self‐training |
title_fullStr | Improve conditional adversarial domain adaptation using self‐training |
title_full_unstemmed | Improve conditional adversarial domain adaptation using self‐training |
title_short | Improve conditional adversarial domain adaptation using self‐training |
title_sort | improve conditional adversarial domain adaptation using self training |
topic | Image recognition Optimisation techniques Computer vision and image processing techniques Neural nets |
url | https://doi.org/10.1049/ipr2.12184 |
work_keys_str_mv | AT ziwang improveconditionaladversarialdomainadaptationusingselftraining AT xiaoliangsun improveconditionaladversarialdomainadaptationusingselftraining AT angsu improveconditionaladversarialdomainadaptationusingselftraining AT gangwang improveconditionaladversarialdomainadaptationusingselftraining AT yangli improveconditionaladversarialdomainadaptationusingselftraining AT qifengyu improveconditionaladversarialdomainadaptationusingselftraining |