Procedural Learning With Robust Visual Features via Low Rank Prior

To apply a convolutional neural network (CNN) to unseen datasets, a common approach is to fine-tune a model pre-trained on a large dataset rather than training from scratch. How to control the fine-tuning process to obtain the desired properties remains a challenging problem. Our key observation is that the visual features of the pre-trained model carry rich information that can be exploited during training. A natural idea is to employ these features in a control strategy that improves the transfer learning process. In this paper, we propose a procedural learning framework that uses the learned low-rank component of the visual features, both from the pre-trained model and during training, to improve the accuracy and generalizability of the CNN. Within this framework, we present an approach that yields independent visualization features (IVFs) and find, via robust independent component analysis, that the low-rank components of the IVFs provide robust features for the framework. We then design a Wasserstein regularization that controls the transport of the IVF distribution from the pre-trained model to the final model via the Wasserstein distance. Experiments on the Cifar-10 and Cifar-100 datasets with a VGG-style CNN show that our method effectively improves classification accuracy and convergence speed. Exploring visual features in this way may also inspire work on other topics, such as image detection and reinforcement learning.
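
The abstract describes two ingredients: extracting a robust low-rank component of the pre-trained model's visual features, and regularizing fine-tuning so that the distribution of those features does not drift too far, measured by a Wasserstein distance. The NumPy sketch below illustrates that recipe in spirit only; it is not the authors' implementation. It substitutes truncated SVD for the paper's robust ICA step and a sliced (random-projection) Wasserstein estimate for the paper's regularizer, and the function and parameter names (low_rank_component, sliced_wasserstein, regularized_loss, lam) are hypothetical.

```python
# Illustrative sketch only, not the paper's implementation. Stand-ins:
# truncated SVD in place of robust ICA for the low-rank IVF component, and
# a sliced (random-projection) Wasserstein estimate in place of the paper's
# regularizer. All function and parameter names are hypothetical.
import numpy as np


def low_rank_component(features, rank=8):
    """Project centered features onto their top-`rank` singular directions
    (a simple stand-in for the robust-ICA low-rank extraction)."""
    centered = features - features.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]


def sliced_wasserstein(x, y, n_proj=64, seed=0):
    """Approximate the Wasserstein-1 distance between two feature clouds by
    averaging 1-D distances over random projection directions.
    Assumes x and y contain the same number of samples."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        w = rng.normal(size=x.shape[1])
        w /= np.linalg.norm(w)
        # In 1-D, W1 between equal-size empirical distributions is the mean
        # absolute difference of the sorted projections.
        total += np.abs(np.sort(x @ w) - np.sort(y @ w)).mean()
    return total / n_proj


def regularized_loss(task_loss, feats_finetuned, feats_pretrained, lam=0.1):
    """Fine-tuning objective: task loss plus a penalty on how far the current
    feature distribution has transported away from the low-rank component of
    the pre-trained features."""
    prior = low_rank_component(feats_pretrained)
    return task_loss + lam * sliced_wasserstein(feats_finetuned, prior)


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    pre = rng.normal(size=(256, 32))              # pre-trained feature batch
    cur = pre + 0.3 * rng.normal(size=pre.shape)  # drifted features during fine-tuning
    print(regularized_loss(1.25, cur, pre))
```

Here lam would trade off the task loss against how tightly the fine-tuned features are held to the pre-trained low-rank prior; the paper's actual regularizer and IVF extraction may differ in detail.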

Bibliographic Details
Main Authors: Haifeng Li, Li Chen, Hailun Ding, Qi Li, Bingyu Sun, Guohua Wu
Format: Article
Language: English
Published: IEEE, 2019-01-01
Series: IEEE Access
Subjects: Low-rank approximation, procedural learning, knowledge transfer, robustness visual feature, sparse
Online Access: https://ieeexplore.ieee.org/document/8624510/
author Haifeng Li
Li Chen
Hailun Ding
Qi Li
Bingyu Sun
Guohua Wu
collection DOAJ
description To apply a convolutional neural network (CNN) to unseen datasets, a common approach is to fine-tune a model pre-trained on a large dataset rather than training from scratch. How to control the fine-tuning process to obtain the desired properties remains a challenging problem. Our key observation is that the visual features of the pre-trained model carry rich information that can be exploited during training. A natural idea is to employ these features in a control strategy that improves the transfer learning process. In this paper, we propose a procedural learning framework that uses the learned low-rank component of the visual features, both from the pre-trained model and during training, to improve the accuracy and generalizability of the CNN. Within this framework, we present an approach that yields independent visualization features (IVFs) and find, via robust independent component analysis, that the low-rank components of the IVFs provide robust features for the framework. We then design a Wasserstein regularization that controls the transport of the IVF distribution from the pre-trained model to the final model via the Wasserstein distance. Experiments on the Cifar-10 and Cifar-100 datasets with a VGG-style CNN show that our method effectively improves classification accuracy and convergence speed. Exploring visual features in this way may also inspire work on other topics, such as image detection and reinforcement learning.
format Article
id doaj.art-6a70e49d4f794c7298fd530cf0fb8ed2
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2019-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling Procedural Learning With Robust Visual Features via Low Rank Prior. IEEE Access, vol. 7, pp. 18884-18893, 2019-01-01. DOI: 10.1109/ACCESS.2019.2894841. IEEE Xplore document 8624510.
Author affiliations:
Haifeng Li - School of Geosciences and Info-Physics, Central South University, Changsha, China
Li Chen - School of Geosciences and Info-Physics, Central South University, Changsha, China
Hailun Ding - School of Software, Central South University, Changsha, China
Qi Li - School of Information Science and Engineering, Central South University, Changsha, China
Bingyu Sun - Institute of Intelligent Machine, Chinese Academy of Sciences, Hefei, China
Guohua Wu - School of Traffic and Transportation Engineering, Central South University, Changsha, China (ORCID: https://orcid.org/0000-0003-1552-9620)
title Procedural Learning With Robust Visual Features via Low Rank Prior
topic Low-rank approximation
procedural learning
knowledge transfer
robustness visual feature
sparse
url https://ieeexplore.ieee.org/document/8624510/