Low-Sample Image Classification Based on Intrinsic Consistency Loss and Uncertainty Weighting Method
As is well known, the classification performance of large deep neural networks is closely related to the amount of annotated data. However, in practical applications, the quantity of annotated data is minimal for many computer vision tasks, which poses a considerable challenge for deep convolutional neural networks that aim to achieve ideal classification performance.
Main Authors: | Zhiguo Li, Lingbo Li, Xi Xiao, Jinpeng Chen, Nawei Zhang, Sai Li |
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Low-sample image classification; deep convolutional neural network; sample intrinsic consistency loss; uncertainty weighting method; image generation model |
Online Access: | https://ieeexplore.ieee.org/document/10124880/ |
_version_ | 1797820007408730112 |
author | Zhiguo Li, Lingbo Li, Xi Xiao, Jinpeng Chen, Nawei Zhang, Sai Li |
author_sort | Zhiguo Li |
collection | DOAJ |
description | As is well known, the classification performance of large deep neural networks is closely tied to the amount of annotated data. In practice, however, many computer vision tasks offer only a small quantity of annotated data, which makes it difficult for deep convolutional neural networks to reach ideal classification performance. This paper proposes a new, fully supervised low-sample image classification model to alleviate the scarcity of labeled samples in real life. Specifically, it presents a new sample intrinsic consistency loss, which updates model parameters more effectively from a “fundamental” perspective by exploring the difference between intrinsic sample features and the semantic information contained in sample labels. Secondly, a new uncertainty weighting method is proposed to weight the original supervised loss: it weights each sample’s loss individually according to its classification status, helping the model learn sample features more effectively and autonomously judge the importance of different local information. Finally, a sample generation model produces artificial samples to supplement the limited quantity of real training samples. The model adjusts its parameters through the combined effect of the sample intrinsic consistency loss and the weighted supervised loss. Using 25% of the SVHN dataset and 30% of the CIFAR-10 dataset as training samples to simulate scenarios with limited sample quantities in real life, the method achieves accuracies of 94.59% and 91.27% respectively, demonstrating its effectiveness on small real datasets. |
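The description above names two training signals: a supervised loss weighted per sample by the model's uncertainty, and an intrinsic consistency term comparing sample features with label semantics. The record does not give the paper's formulas, so the following is a minimal illustrative sketch under assumed definitions — the `1 - p(true class)` weighting rule and the squared-distance consistency term are assumptions, not the authors' published method:

```python
import math


def softmax(logits):
    """Numerically stable softmax over one sample's logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]


def uncertainty_weighted_ce(batch_logits, labels):
    """Supervised loss where each sample is weighted by how uncertain the
    model is about its true class: weight = 1 - p(true class), so poorly
    classified samples contribute more. (Assumed weighting rule.)"""
    total = 0.0
    for logits, y in zip(batch_logits, labels):
        p_true = softmax(logits)[y]
        weight = 1.0 - p_true
        ce = -math.log(p_true + 1e-12)  # standard cross-entropy term
        total += weight * ce
    return total / len(labels)


def intrinsic_consistency(features, label_embed, labels):
    """Mean squared distance between each sample's feature vector and an
    embedding of its label's semantic information. (Assumed form.)"""
    total = 0.0
    for f, y in zip(features, labels):
        total += sum((a - b) ** 2 for a, b in zip(f, label_embed[y]))
    return total / len(labels)
```

A training step would then minimize the weighted supervised loss plus some coefficient times the consistency term; the coefficient, the label-embedding design, and the sample generation model are beyond this sketch.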
first_indexed | 2024-03-13T09:31:06Z |
format | Article |
id | doaj.art-1f3e437e84754694a4e3830aca175eec |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-03-13T09:31:06Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-1f3e437e84754694a4e3830aca175eec (indexed 2023-05-25T23:00:42Z); English; IEEE; IEEE Access; ISSN 2169-3536; published 2023-01-01; Vol. 11, pp. 49059-49070; DOI 10.1109/ACCESS.2023.3276875; IEEE article 10124880; "Low-Sample Image Classification Based on Intrinsic Consistency Loss and Uncertainty Weighting Method"; Zhiguo Li (Information Construction and Service Center, Neijiang Normal University, Neijiang, China), Lingbo Li (Library of Information Center, Zhejiang Technical Institute of Economics, Hangzhou, China), Xi Xiao (School of Computer Science, Southwest Petroleum University, Chengdu, China), Jinpeng Chen (Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA), Nawei Zhang (College of Information Science and Engineering, China University of Petroleum, Beijing, China), Sai Li (College of Mechanical and Electrical Engineering, Zaozhuang University, Zaozhuang, China; ORCID 0009-0007-4704-8254); abstract and subject keywords as in the description and topic fields; https://ieeexplore.ieee.org/document/10124880/ |
title | Low-Sample Image Classification Based on Intrinsic Consistency Loss and Uncertainty Weighting Method |
topic | Low-sample image classification; deep convolutional neural network; sample intrinsic consistency loss; uncertainty weighting method; image generation model |
url | https://ieeexplore.ieee.org/document/10124880/ |