Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification
A well-designed loss function can improve the representational power of network features without adding computation at the inference stage, and has therefore attracted considerable recent research attention. Because existing lightweight networks attach a loss only to the last layer, the gradient is severely attenuated during backpropagation. We therefore propose a hierarchical polynomial kernel prototype loss function: adding a polynomial kernel loss at multiple stages of the deep neural network improves the efficiency of gradient propagation, and the multi-layer prototype losses are used only during training, adding no computation at inference. In addition, the strong non-linear expressive power of the polynomial kernel improves the feature representation of the network. Experiments on multiple public datasets show that lightweight networks trained with the proposed hierarchical polynomial kernel loss achieve higher accuracy than those trained with other loss functions.
Main Authors: | Chengcheng Xiao, Xiaowen Liu, Chi Sun, Zhongyu Liu, Enjie Ding |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-10-01 |
Series: | Applied Sciences |
Subjects: | deep learning; light-weight convolutional neural networks; loss function; visual classification |
Online Access: | https://www.mdpi.com/2076-3417/12/20/10336 |
_version_ | 1797475644260483072 |
author | Chengcheng Xiao; Xiaowen Liu; Chi Sun; Zhongyu Liu; Enjie Ding
author_facet | Chengcheng Xiao; Xiaowen Liu; Chi Sun; Zhongyu Liu; Enjie Ding
author_sort | Chengcheng Xiao |
collection | DOAJ |
description | A well-designed loss function can improve the representational power of network features without adding computation at the inference stage, and has therefore attracted considerable recent research attention. Because existing lightweight networks attach a loss only to the last layer, the gradient is severely attenuated during backpropagation. We therefore propose a hierarchical polynomial kernel prototype loss function: adding a polynomial kernel loss at multiple stages of the deep neural network improves the efficiency of gradient propagation, and the multi-layer prototype losses are used only during training, adding no computation at inference. In addition, the strong non-linear expressive power of the polynomial kernel improves the feature representation of the network. Experiments on multiple public datasets show that lightweight networks trained with the proposed hierarchical polynomial kernel loss achieve higher accuracy than those trained with other loss functions. |
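The mechanism the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes a polynomial kernel of the form (p·x + c)^d between a feature vector and learnable per-class prototype vectors, and that each stage's feature map has already been pooled to a vector; all function names and the stage-weighting scheme are hypothetical.

```python
import numpy as np

def poly_kernel(feat, protos, degree=2, coef0=1.0):
    """Polynomial-kernel similarity between one feature vector and each class prototype."""
    return (protos @ feat + coef0) ** degree

def proto_softmax_loss(feat, protos, label, degree=2):
    """Cross-entropy over kernel similarities; each prototype acts as a class centre."""
    logits = poly_kernel(feat, protos, degree)
    logits = logits - logits.max()              # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[label])

def hierarchical_loss(stage_feats, stage_protos, label, weights=None):
    """Sum the prototype loss over several network stages (used only during training)."""
    if weights is None:
        weights = [1.0] * len(stage_feats)
    return sum(w * proto_softmax_loss(f, P, label)
               for w, f, P in zip(weights, stage_feats, stage_protos))
```

At inference time only the backbone and its final classifier run, so the per-stage prototype heads add no cost there; during training the summed per-stage losses give intermediate layers a direct gradient path.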
first_indexed | 2024-03-09T20:47:59Z |
format | Article |
id | doaj.art-c2a0dcd8c35040a4ac79ef5bf0bb5b48 |
institution | Directory Open Access Journal |
issn | 2076-3417 |
language | English |
last_indexed | 2024-03-09T20:47:59Z |
publishDate | 2022-10-01 |
publisher | MDPI AG |
record_format | Article |
series | Applied Sciences |
spelling | doaj.art-c2a0dcd8c35040a4ac79ef5bf0bb5b48 | 2023-11-23T22:43:01Z | eng | MDPI AG | Applied Sciences | ISSN 2076-3417 | 2022-10-01 | Vol. 12, No. 20, Art. 10336 | DOI 10.3390/app122010336 | Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification | Chengcheng Xiao, Xiaowen Liu, Chi Sun (School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221008, China); Zhongyu Liu (School of Information Engineering, Xuzhou University of Technology, Xuzhou 221000, China); Enjie Ding (School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221008, China) | https://www.mdpi.com/2076-3417/12/20/10336 | deep learning; light-weight convolutional neural networks; loss function; visual classification |
spellingShingle | Chengcheng Xiao; Xiaowen Liu; Chi Sun; Zhongyu Liu; Enjie Ding; Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification; Applied Sciences; deep learning; light-weight convolutional neural networks; loss function; visual classification |
title | Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification |
title_full | Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification |
title_fullStr | Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification |
title_full_unstemmed | Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification |
title_short | Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification |
title_sort | hierarchical prototypes polynomial softmax loss function for visual classification |
topic | deep learning; light-weight convolutional neural networks; loss function; visual classification |
url | https://www.mdpi.com/2076-3417/12/20/10336 |
work_keys_str_mv | AT chengchengxiao hierarchicalprototypespolynomialsoftmaxlossfunctionforvisualclassification AT xiaowenliu hierarchicalprototypespolynomialsoftmaxlossfunctionforvisualclassification AT chisun hierarchicalprototypespolynomialsoftmaxlossfunctionforvisualclassification AT zhongyuliu hierarchicalprototypespolynomialsoftmaxlossfunctionforvisualclassification AT enjieding hierarchicalprototypespolynomialsoftmaxlossfunctionforvisualclassification |