Multimodal sensor fusion in the latent representation space

Abstract: A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for the sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e., compressed sensing. We demonstrate its effectiveness and strong performance on a range of multimodal fusion experiments, such as multisensory classification, denoising, and recovery from subsampled observations.

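The abstract outlines a two-stage method but, as a catalog record, gives no implementation details. As a rough illustration of stage one only, the sketch below trains a joint variational autoencoder over two sensor modalities on unlabelled, paired data. The choice of a VAE, the shared-latent design, and every dimension and hyperparameter here are assumptions made for illustration, not the authors' architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointVAE(nn.Module):
    """Illustrative multimodal generative model (not the paper's): two
    modality-specific encoders feed one shared latent q(z | x_a, x_b),
    and one decoder per modality reconstructs from that shared z."""
    def __init__(self, dim_a=64, dim_b=32, latent_dim=16):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 128), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 128), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.dec_a = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                   nn.Linear(128, dim_a))
        self.dec_b = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                   nn.Linear(128, dim_b))

    def encode(self, x_a, x_b):
        h = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=-1)
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z):
        return self.dec_a(z), self.dec_b(z)

    def forward(self, x_a, x_b):
        mu, logvar = self.encode(x_a, x_b)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        recon_a, recon_b = self.decode(z)
        return recon_a, recon_b, mu, logvar

def elbo_loss(x_a, x_b, recon_a, recon_b, mu, logvar):
    # Negative ELBO: reconstruction error for both modalities plus KL to N(0, I).
    rec = (F.mse_loss(recon_a, x_a, reduction="sum")
           + F.mse_loss(recon_b, x_b, reduction="sum"))
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Unsupervised training on unlabelled, paired multimodal samples.
model = JointVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    x_a = torch.randn(32, 64)  # stand-ins for real sensor batches
    x_b = torch.randn(32, 32)
    recon_a, recon_b, mu, logvar = model(x_a, x_b)
    loss = elbo_loss(x_a, x_b, recon_a, recon_b, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()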

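Under the same assumptions, stage two can be read as compressed sensing with a generative prior: hold the trained decoder fixed and search the latent space by gradient descent for the code whose decoded signals best match whatever subsampled, noisy observations each sensor provides. The masked loss, regulariser weight, and step count below are illustrative, and the code continues from the sketch above. Because both modalities in this sketch decode from a single z, evidence from one sensor constrains the reconstruction of the other, which is what lets one procedure serve denoising and recovery alike.

import torch

def fuse(model, y_a, mask_a, y_b, mask_b, latent_dim=16, steps=500, lr=1e-2):
    # Search variable: one latent code per batch element, initialised at the prior mean.
    z = torch.zeros(y_a.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_a_hat, x_b_hat = model.decode(z)
        # Data fidelity is measured only where samples were actually taken
        # (the masks encode the subsampling pattern), plus a small pull
        # towards the latent prior.
        loss = (((mask_a * (x_a_hat - y_a)) ** 2).sum()
                + ((mask_b * (x_b_hat - y_b)) ** 2).sum()
                + 1e-3 * (z ** 2).sum())
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model.decode(z)  # fused full-resolution reconstructions

# Example: recover both signals from ~25% random subsampling.
mask_a = (torch.rand(32, 64) < 0.25).float()
mask_b = (torch.rand(32, 32) < 0.25).float()
y_a = mask_a * torch.randn(32, 64)  # stand-ins for real subsampled observations
y_b = mask_b * torch.randn(32, 32)
x_a_rec, x_b_rec = fuse(model, y_a, mask_a, y_b, mask_b)
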
Bibliographic Details
Main Authors: Robert J. Piechocki, Xiaoyang Wang, Mohammud J. Bocus
Affiliation: School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol (all authors)
Format: Article
Language: English
Published: Nature Portfolio, 2023-02-01
Series: Scientific Reports
ISSN: 2045-2322
Online Access: https://doi.org/10.1038/s41598-022-24754-w