Low-Sample Image Classification Based on Intrinsic Consistency Loss and Uncertainty Weighting Method


Bibliographic Details
Main Authors: Zhiguo Li, Lingbo Li, Xi Xiao, Jinpeng Chen, Nawei Zhang, Sai Li
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10124880/
Description
Summary: As is well known, the classification performance of large deep neural networks is closely tied to the amount of annotated data. In practical applications, however, annotated data are scarce for many computer vision tasks, which poses a considerable challenge for deep convolutional neural networks aiming to achieve ideal classification performance. This paper proposes a new, fully supervised low-sample image classification model to alleviate the problem of limited labeled samples in real life. Specifically, the paper presents a new sample intrinsic consistency loss, which updates model parameters more effectively from a "fundamental" perspective by exploring the difference between intrinsic sample features and the semantic information contained in sample labels. Secondly, a new uncertainty weighting method is proposed to weight the original supervised loss. It learns sample features more effectively by weighting sample losses one by one according to their classification status, helping the model autonomously gauge the importance of different local information. Finally, a sample generation model produces artificial samples to supplement the limited quantity of real training samples. The model adjusts its parameters through the combined effect of the sample intrinsic consistency loss and the weighted supervised loss. Using 25% of the SVHN dataset and 30% of the CIFAR-10 dataset as training samples to simulate real-life scenarios with limited samples, the method achieves accuracies of 94.59% and 91.27% respectively, demonstrating its effectiveness on small real datasets.
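The abstract gives no formulas, but the training objective it describes — a per-sample uncertainty-weighted supervised loss combined with an intrinsic consistency term between sample features and label semantics — can be sketched roughly as below. This is an illustrative assumption, not the authors' published method: the entropy-based weighting, the cosine-distance consistency term, and all function names here are hypothetical stand-ins for the unspecified definitions in the paper.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_ce(logits, targets):
    """Supervised loss weighted sample by sample.

    Hypothetical weighting: each sample's cross-entropy is scaled by the
    normalized entropy of its softmax prediction, so samples the model
    classifies with low confidence contribute larger gradients.
    """
    per_sample_ce = F.cross_entropy(logits, targets, reduction="none")
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    weights = entropy / torch.log(torch.tensor(float(logits.size(1))))
    return (weights * per_sample_ce).mean()

def intrinsic_consistency(features, label_embeddings, targets):
    """Hypothetical consistency term: pulls each sample's feature vector
    toward an embedding of its label's semantic information."""
    target_embed = label_embeddings[targets]  # (N, D) embedding per sample
    return (1.0 - F.cosine_similarity(features, target_embed, dim=1)).mean()

def total_loss(logits, features, label_embeddings, targets, lam=0.5):
    """Combined objective: weighted supervised loss plus a consistency
    term, balanced by an assumed trade-off coefficient `lam`."""
    return (uncertainty_weighted_ce(logits, targets)
            + lam * intrinsic_consistency(features, label_embeddings, targets))
```

In this sketch, `total_loss` is what the optimizer would minimize; the paper's actual weighting rule based on "classification status" and its consistency measure may differ substantially.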
ISSN:2169-3536