Improving Generative and Discriminative Modelling Performance by Implementing Learning Constraints in Encapsulated Variational Autoencoders


Bibliographic Details
Main Authors: Wenjun Bai, Changqin Quan, Zhi-Wei Luo
Format: Article
Language: English
Published: MDPI AG, 2019-06-01
Series: Applied Sciences
Subjects: deep generative model; learning constraint; representation learning
Online Access: https://www.mdpi.com/2076-3417/9/12/2551
collection DOAJ
description Learning latent representations of observed data that favour both discriminative and generative tasks remains a challenge in artificial-intelligence (AI) research. Previous attempts, ranging from the convex binding of discriminative and generative models to the semisupervised learning paradigm, could hardly yield optimal performance on both generative and discriminative tasks. To this end, we harness two neuroscience-inspired learning constraints, namely dependence-minimisation and regularisation constraints, to improve the generative and discriminative modelling performance of a deep generative model. To demonstrate these learning constraints, we introduce a novel deep generative model, the encapsulated variational autoencoder (EVAE), which stacks two different variational autoencoders together with their learning algorithms. On the MNIST digits dataset, the generative modelling performance of EVAEs improved under the imposed dependence-minimisation constraint, encouraging the derived deep generative model to produce varied patterns of MNIST-like digits. On CIFAR-10(4K), a semisupervised EVAE with an imposed regularisation learning constraint achieved competitive discriminative performance on the classification benchmark, even against state-of-the-art semisupervised learning approaches.
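The description above explains the idea at a high level: two variational autoencoders are stacked, and a dependence-minimisation constraint is imposed between them. As a rough illustration only, the sketch below (not the authors' implementation; the function names, shapes, and the choice of a squared cross-covariance penalty as the dependence-minimisation surrogate are all assumptions) shows how such a combined objective could be assembled from two latent codes of the same input.

```python
# Illustrative sketch, NOT the paper's EVAE: two linear-Gaussian encoders
# map the same input to separate latent codes, and a dependence-minimisation
# surrogate (squared cross-covariance between the codes) is added to the
# usual VAE KL terms. Reconstruction terms are omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W, b):
    """Linear-Gaussian encoder: returns mean and log-variance of q(z|x)."""
    h = x @ W + b
    d = h.shape[1] // 2
    return h[:, :d], h[:, d:]          # mu, log_var

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)

def cross_covariance_penalty(z1, z2):
    """Hypothetical dependence-minimisation surrogate: squared
    cross-covariance between the two codes (zero iff uncorrelated)."""
    z1c = z1 - z1.mean(axis=0)
    z2c = z2 - z2.mean(axis=0)
    cov = z1c.T @ z2c / z1.shape[0]
    return float(np.sum(cov**2))

# Toy data and parameters (shapes chosen only for illustration).
x = rng.normal(size=(64, 8))
W1, b1 = rng.normal(size=(8, 4)) * 0.1, np.zeros(4)   # inner VAE, 2-d latent
W2, b2 = rng.normal(size=(8, 4)) * 0.1, np.zeros(4)   # outer VAE, 2-d latent

mu1, lv1 = encode(x, W1, b1)
mu2, lv2 = encode(x, W2, b2)

# Reparameterised samples from each approximate posterior.
z1 = mu1 + np.exp(0.5 * lv1) * rng.normal(size=mu1.shape)
z2 = mu2 + np.exp(0.5 * lv2) * rng.normal(size=mu2.shape)

# Combined objective: both KL terms plus the weighted dependence penalty.
lam = 1.0
loss = (kl_to_standard_normal(mu1, lv1).mean()
        + kl_to_standard_normal(mu2, lv2).mean()
        + lam * cross_covariance_penalty(z1, z2))
print(float(loss))
```

The penalty is one common way to discourage statistical dependence between two representations; the paper's actual EVAE objective and stacking scheme should be taken from the article itself.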
id doaj.art-591a07a59e294b0f894bc4771e2351b8
issn 2076-3417
doi 10.3390/app9122551
citation Applied Sciences, 2019, vol. 9, issue 12, article 2551
affiliation Wenjun Bai, Changqin Quan, Zhi-Wei Luo: School of System Informatics, Kobe University, 1-1, Rokkodai-cho, Nada-ku, Kobe 657-8501, Japan