Semi-Supervised Boosting Using Similarity Learning Based on Modular Sparse Representation With Marginal Representation Learning of Graph Structure Self-Adaptive


Bibliographic Details
Main Authors: Shu Hua Xu, Fei Gao
Format: Article
Language:English
Published: IEEE 2020-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/9220775/
Description
Summary:The purpose of a semi-supervised boosting strategy is to improve the classification performance of a given classifier on a large number of unlabeled data. In semi-supervised boosting, unlabeled samples are assigned pseudo labels according to the similarities between the labeled and unlabeled samples, and the unlabeled samples whose pseudo labels have high confidence are selected as labeled samples. Good similarities help assign more appropriate pseudo labels to the unlabeled samples, and the selected samples with pseudo labels are then used as labeled samples to train the new ensemble classifier. Learning good, distinguishable similarities between unlabeled and labeled samples is therefore of remarkable importance to the performance of a semi-supervised boosting strategy. This article presents a semi-supervised boosting framework that uses similarity learning based on modular sparse representation, employing a marginal regression function with probabilistic graph structure adaptation. Distinguishable regression target analysis, graph structure adaptation, robust modular sparse representation, and semi-supervised boosting learning are seamlessly incorporated into a joint framework. The framework learns marginal regression targets from the data, rather than exploiting the conventional zero-one label matrix that greatly restricts the freedom of the regression fit and degrades the regression results; this improves the inter-class separation of the learned representation. Meanwhile, a regularization term based on probabilistic connection knowledge is used to construct a graph regularizer with adaptive optimization, which improves the intra-class compactness of the learned representation. Additionally, modular sparse representation learning is used to improve the robustness of the learned representation.
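The idea of learning marginal regression targets instead of a fixed zero-one label matrix can be illustrated with a small sketch. This is an assumption about the mechanism, not the article's exact formulation: one common relaxation drags each target entry away from 0/1 along a class-dependent direction by a learned nonnegative margin, enlarging inter-class separation. The names `marginal_targets`, `Y`, and `margin` are hypothetical.

```python
import numpy as np

def marginal_targets(Y, margin):
    """Relax a zero-one label matrix Y into marginal regression targets.

    Illustrative sketch only: B holds the dragging direction (+1 for the
    true class, -1 otherwise) and `margin` is a learned nonnegative
    margin matrix, giving targets T = Y + B * M with M >= 0.
    """
    B = 2 * Y - 1                            # +1 on true-class entries, -1 elsewhere
    return Y + B * np.maximum(margin, 0.0)   # true-class targets grow past 1, others drop below 0
```

In a full method, the margin matrix would be optimized jointly with the regression weights; here it is simply taken as given to show how the relaxed targets widen the gap between classes.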
The experimental results on four datasets, including face and object datasets, show that the recognition rates of the proposed method are significantly better than those of other state-of-the-art methods.
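As background, the pseudo-labeling step of a generic semi-supervised boosting round can be sketched as follows. This is a minimal illustration assuming cosine similarity and a nearest-labeled-neighbour rule rather than the modular sparse similarity the article proposes; the function names (`cosine_similarity`, `pseudo_label`) and the confidence threshold are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    # pairwise cosine similarity between rows of a and rows of b
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_n @ b_n.T

def pseudo_label(X_lab, y_lab, X_unl, threshold=0.9):
    """Assign each unlabeled sample the label of its most similar
    labeled sample; keep only the high-confidence assignments."""
    sim = cosine_similarity(X_unl, X_lab)   # (n_unlabeled, n_labeled)
    nearest = sim.argmax(axis=1)            # index of the closest labeled sample
    confidence = sim.max(axis=1)            # similarity used as a confidence proxy
    keep = confidence >= threshold          # high-confidence subset only
    return X_unl[keep], y_lab[nearest[keep]], keep
```

The samples returned by `pseudo_label` would then be appended to the labeled set before training the next classifier in the ensemble, which is why the quality of the similarity measure directly drives the quality of the boosted classifier.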
ISSN:2169-3536