Supervised Learning via Unsupervised Sparse Autoencoder


Bibliographic Details
Main Authors: Jianran Liu, Chan Li, Wenyuan Yang
Format: Article
Language: English
Published: IEEE, 2018-01-01
Series: IEEE Access
Subjects: Machine learning; dimensionality reduction; sparse autoencoder; supervised learning; feature representation
Online Access: https://ieeexplore.ieee.org/document/8558569/
description Dimensionality reduction is commonly used to preprocess high-dimensional data and is an essential step in machine learning and data mining. A good low-dimensional feature representation can improve the efficiency of subsequent learning tasks. However, existing dimensionality reduction methods mostly rely on datasets with sufficient labels and fail to produce effective feature vectors for datasets with insufficient labels. In this paper, an unsupervised multilayer sparse autoencoder model is studied. Its advantage is that it takes minimizing the reconstruction error as its optimization goal, so that the resulting low-dimensional features reconstruct the original dataset as faithfully as possible; the reduction from high-dimensional to low-dimensional datasets is therefore effective. First, the relationship among the reconstructed data, the number of iterations, and the number of hidden variables is explored. Second, the dimensionality reduction ability of the sparse autoencoder is demonstrated: several classical feature representation methods are compared with the sparse autoencoder on publicly available datasets, the corresponding low-dimensional representations are fed into different supervised classifiers, and the classification performances are reported. Finally, by adjusting the parameters that might influence classification performance, the parametric sensitivity of the sparse autoencoder is shown. Extensive low-dimensional feature classification experiments demonstrate that the sparse autoencoder is more efficient and reliable than the other selected classical dimensionality reduction algorithms.
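The pipeline the abstract describes — unsupervised, reconstruction-driven dimensionality reduction whose low-dimensional codes are then handed to a supervised classifier — can be sketched as follows. This is an illustrative single-hidden-layer sketch, not the paper's multilayer implementation; the L1 activation penalty, learning rate, and toy data are assumptions made here for a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_autoencoder(X, n_hidden, lam=1e-3, lr=0.1, n_iter=300):
    """Learn an n_hidden-dimensional code for X (n_samples x n_features)."""
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, d)); b2 = np.zeros(d)
    losses = []
    for _ in range(n_iter):
        H = sigmoid(X @ W1 + b1)      # encoder: low-dimensional code
        Xhat = H @ W2 + b2            # linear decoder: reconstruction
        E = Xhat - X
        # Objective: 0.5/n * ||E||^2 + lam/n * sum|H|
        # (reconstruction error plus a sparsity penalty on activations)
        losses.append(0.5 / n * np.sum(E**2) + lam / n * np.sum(np.abs(H)))
        # Backpropagation of the objective above
        dXhat = E / n
        dH = dXhat @ W2.T + (lam / n) * np.sign(H)
        dZ1 = dH * H * (1.0 - H)      # derivative of the sigmoid
        W2 -= lr * (H.T @ dXhat); b2 -= lr * dXhat.sum(axis=0)
        W1 -= lr * (X.T @ dZ1);   b1 -= lr * dZ1.sum(axis=0)
    encode = lambda Xnew: sigmoid(Xnew @ W1 + b1)
    return encode, losses

# Toy data: 20-dimensional samples lying near a 3-dimensional subspace.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 20))
encode, losses = train_sparse_autoencoder(X, n_hidden=3)
low_dim = encode(X)                   # shape (200, 3): reduced features
```

The resulting `low_dim` features could then be fed into any off-the-shelf supervised classifier, mirroring the evaluation protocol the abstract describes.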
id doaj.art-b3d938628d9041e997fd31b0edceac81
institution Directory Open Access Journal
issn 2169-3536
Citation: J. Liu, C. Li and W. Yang, "Supervised Learning via Unsupervised Sparse Autoencoder," IEEE Access, vol. 6, pp. 73802-73814, 2018, doi: 10.1109/ACCESS.2018.2884697 (IEEE Xplore document 8558569).
Author affiliations: Jianran Liu (ORCID: 0000-0002-5835-7913) and Wenyuan Yang, Fujian Key Laboratory of Granular Computing and Application, Minnan Normal University, Zhangzhou, China; Chan Li, Xiamen University Tan KahKee College, Xiamen, China.
topic Machine learning
dimensionality reduction
sparse autoencoder
supervised learning
feature representation
url https://ieeexplore.ieee.org/document/8558569/