Modulating scalable Gaussian processes for expressive statistical learning

Full description

In a learning task, a Gaussian process (GP) captures the statistical relationship between inputs and outputs, offering not only a predictive mean but also the associated variability. The vanilla GP, however, struggles to learn complicated distributions exhibiting, e.g., heteroscedastic noise, multi-modality, and non-stationarity from massive data, owing to its Gaussian marginals and cubic complexity. To this end, this article studies new scalable GP paradigms, including the non-stationary heteroscedastic GP, the mixture of GPs, and the latent GP, which introduce additional latent variables to modulate the outputs or inputs in order to learn richer, non-Gaussian statistical representations. In particular, we resort to different variational inference strategies to arrive at analytical or tighter evidence lower bounds (ELBOs) on the marginal likelihood for efficient and effective model training. Extensive numerical experiments against state-of-the-art GP and neural network (NN) counterparts on various tasks verify the superiority of these scalable modulated GPs, especially the scalable latent GP, in learning diverse data distributions.
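The modulation idea the abstract describes can be made concrete with a small example. Below is a minimal, self-contained NumPy sketch, not the authors' implementation: it shows (i) the vanilla GP's predictive mean and variance, whose Cholesky factorisation is the source of the cubic complexity, and (ii) how a latent log-noise function g(x) modulates the output noise to make it heteroscedastic. Here g is a hand-picked stand-in for illustration; in the paper it is itself a GP whose posterior is approximated variationally via an ELBO.

import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.5, variance=1.0):
    # Squared-exponential kernel k(x, x').
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise_var):
    # Exact GP predictive mean/variance; noise_var is a length-n vector,
    # so heteroscedastic (modulated) noise is just a non-constant diagonal.
    K = rbf_kernel(X, X) + np.diag(noise_var)
    L = np.linalg.cholesky(K)          # O(n^3): the cubic bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf_kernel(X, Xs)
    mean = Ks.T @ alpha                # predictive mean
    V = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - (V**2).sum(0)  # predictive variance
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
g = -3.0 + 2.0 * np.tanh(X[:, 0])      # hypothetical latent log-noise g(x)
y = np.sin(2 * X[:, 0]) + rng.normal(0.0, np.exp(0.5 * g))  # input-dependent noise

Xs = np.linspace(-3, 3, 5)[:, None]
mean_homo, var_homo = gp_posterior(X, y, Xs, np.full(200, 0.1))  # vanilla GP
mean_het, var_het = gp_posterior(X, y, Xs, np.exp(g))            # modulated noise
print(np.sqrt(var_homo))  # roughly flat uncertainty under constant noise
print(np.sqrt(var_het))   # uncertainty now tracks the modulated noise level

For the scalable variants the abstract refers to, exact inference like the above is replaced by a sparse variational bound; a generic form for a modulated GP with latent functions f and g is the sum of per-datum expected log-likelihoods E[log p(y_i | f_i, g_i)] minus KL terms for the inducing variables of each latent GP. The paper's specific ELBOs differ per model and are given in the full text.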

Bibliographic Details
Main Authors: Liu, Haitao; Ong, Yew-Soon; Jiang, Xiaomo; Wang, Xiaofang
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: 2021
Subjects: Engineering::Computer science and engineering; Gaussian Process; Modulation
Online Access: https://hdl.handle.net/10356/162582
Citation: Liu, H., Ong, Y., Jiang, X. & Wang, X. (2021). Modulating scalable Gaussian processes for expressive statistical learning. Pattern Recognition, 120, 108121. https://dx.doi.org/10.1016/j.patcog.2021.108121
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2021.108121
Funding: This work was supported by the National Key Research and Development Program of China (2020YFA0714403), the National Natural Science Foundation of China (52005074), and the Fundamental Research Funds for the Central Universities (DUT19RC(3)070). It was also partially supported by the Research and Innovation in Science and Technology Major Project of Liaoning Province (2019JH1-10100024) and the MIIT Marine Welfare Project (Z135060009002).
Rights: © 2021 Elsevier Ltd. All rights reserved.