Self-paced regularization in label distribution learning

Label Distribution Learning is a learning paradigm that outputs a representation of how well each label describes a given instance. Research on the paradigm has so far relied on conventional machine learning algorithms and has not considered deep learning as an alternative. Deep learning, a sub-discipline of machine learning, has surged in popularity in recent years; however, its training process is time-consuming because it requires repeatedly iterating over a large training set. To address this problem, we propose a method that combines self-paced regularization with deep learning, in which the model is presented with training data of progressively increasing size and, by extension, difficulty. Experimental results were recorded and compared against a conventional deep learning algorithm as well as state-of-the-art machine learning algorithms.
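To make the approach described in the abstract concrete, the following is a minimal sketch of self-paced training for label distribution learning. It is not the project's actual implementation: a linear-softmax model stands in for a deep network, the data are synthetic, and the pacing schedule, learning rate, and step counts are illustrative assumptions.

```python
# Minimal sketch of self-paced training for label distribution learning.
# NOT the report's implementation: model, data, and hyperparameters are
# illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: X holds n instances with d features; Y holds one label
# *distribution* per instance (each row sums to 1), not a single hard label.
n, d, k = 200, 10, 4
X = rng.normal(size=(n, d))
Y = rng.dirichlet(np.ones(k), size=n)

W = np.zeros((d, k))  # parameters of a linear-softmax model (stand-in for a deep net)

def predict(X, W):
    """Predicted label distributions: softmax over a linear score."""
    z = X @ W
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kl_per_instance(Y, P, eps=1e-12):
    """KL divergence from predicted to true distribution, one value per instance."""
    return np.sum(Y * (np.log(Y + eps) - np.log(P + eps)), axis=1)

# Self-paced schedule: at each stage, keep only the currently "easiest"
# (lowest-loss) fraction of the training set, train on it, then enlarge it.
for frac in (0.25, 0.5, 0.75, 1.0):           # assumed pacing schedule
    losses = kl_per_instance(Y, predict(X, W))
    keep = np.argsort(losses)[: int(frac * n)]
    Xs, Ys = X[keep], Y[keep]
    for _ in range(100):                       # plain gradient descent on the subset
        P = predict(Xs, W)
        grad = Xs.T @ (P - Ys) / len(keep)     # gradient of mean KL loss w.r.t. W
        W -= 0.5 * grad                        # assumed learning rate
    mean_kl = kl_per_instance(Y, predict(X, W)).mean()
    print(f"pace {frac:.2f}: mean KL over all instances = {mean_kl:.4f}")
```

In this sketch, "difficulty" is measured by the model's own current loss, so the training subset grows from easy to hard as the model improves, which is the general idea behind presenting progressively larger (and harder) portions of the data.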

Bibliographic Details
Main Author: Koh, Terence Kang Wei
Other Authors: Bo An
Format: Final Year Project (FYP)
Language: English
Published: 2019
Subjects: DRNTU::Engineering::Computer science and engineering
Online Access: http://hdl.handle.net/10356/76943
Institution: Nanyang Technological University
School: School of Computer Science and Engineering
Degree: Bachelor of Engineering (Computer Science)
Physical Description: 29 p. (application/pdf)