Improve conditional adversarial domain adaptation using self‐training

Bibliographic Details

Main Authors: Zi Wang, Xiaoliang Sun, Ang Su, Gang Wang, Yang Li, Qifeng Yu
Format: Article
Language: English
Published: Wiley 2021-08-01
Series: IET Image Processing
Online Access: https://doi.org/10.1049/ipr2.12184
Description
Summary: Domain adaptation for image classification is one of the most fundamental transfer learning tasks and a promising way to reduce the annotation burden. Existing deep adversarial domain adaptation approaches rely on minimax optimization to match global features across domains, but the information conveyed by unlabelled target samples is not fully exploited. Here, adversarial learning and self‐training are unified in a single objective function, in which the neural network parameters and the pseudo‐labels of target samples are jointly optimized: the model's own predictions on unlabelled target samples provide the pseudo‐labels. Training alternates between two steps, first updating the network and then regenerating the pseudo‐labels, and the loop repeats. The proposed method improves mean accuracy over the baseline by 2% on Office‐31, 0.7% on ImageCLEF‐DA, 1.8% on Office‐Home, and 1.2% on Digits, outperforming most state‐of‐the‐art approaches.
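The alternating procedure sketched in the summary (train the model, then regenerate pseudo‐labels for the unlabelled target samples, and repeat) can be illustrated with a minimal sketch. This is not the authors' implementation: the deep network and the adversarial feature‐alignment term are replaced here by a simple nearest‐centroid classifier on synthetic 2‐D data, and all names (`fit_centroids`, `pseudo_label`, the domain shift) are illustrative assumptions.

```python
# Hedged sketch of alternating self-training, NOT the paper's model:
# a nearest-centroid classifier stands in for the neural network, and
# the adversarial domain-alignment loss is omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)

# Labelled source domain: two Gaussian classes.
Xs = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
                rng.normal([3, 3], 0.3, (50, 2))])
ys = np.array([0] * 50 + [1] * 50)

# Unlabelled target domain: same classes under a (hypothetical) domain shift.
shift = np.array([0.8, 0.8])
Xt = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
                rng.normal([3, 3], 0.3, (50, 2))]) + shift
yt_true = np.array([0] * 50 + [1] * 50)   # held out, used only for evaluation

def fit_centroids(X, y):
    """Step 1: train the model (here, per-class centroids) on labelled data."""
    return np.vstack([X[y == c].mean(axis=0) for c in (0, 1)])

def pseudo_label(centroids, X):
    """Step 2: pseudo-label samples from the model's own predictions."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Alternate the two steps: each round's pseudo-labels feed the next retraining.
centroids = fit_centroids(Xs, ys)
for _ in range(5):
    yt_pseudo = pseudo_label(centroids, Xt)       # generate pseudo-labels
    X_all = np.vstack([Xs, Xt])                   # source + pseudo-labelled target
    y_all = np.concatenate([ys, yt_pseudo])
    centroids = fit_centroids(X_all, y_all)       # retrain on the union

accuracy = (pseudo_label(centroids, Xt) == yt_true).mean()
print(f"target accuracy: {accuracy:.2f}")
```

In the paper the two steps additionally share an adversarial objective, so the network and the pseudo‐labels are optimized jointly rather than by this bare alternation; the sketch only shows the loop structure.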
ISSN: 1751-9659, 1751-9667