Towards generalising neural implicit representations

Neural implicit representations have shown substantial improvements in efficiently storing 3D data when compared to conventional formats. However, the focus of existing work has mainly been on storage and subsequent reconstruction. In this work, we show that training neural representations for reconstruction tasks alongside conventional tasks can produce more general encodings that admit reconstructions of equal quality to single-task training, whilst improving results on conventional tasks when compared to single-task encodings. We reformulate the semantic segmentation task, creating a more representative task for implicit representation contexts, and through multi-task experiments on reconstruction, classification, and segmentation, show that our approach learns feature-rich encodings that admit equal performance for each task.
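The abstract describes a shared implicit encoding decoded by several task heads (reconstruction, classification, segmentation) and trained jointly. The sketch below is only an illustration of that general setup under assumptions of ours, not the authors' implementation: the PointNet-style encoder, the occupancy-style reconstruction head, the per-query-point segmentation head, and all names (MultiTaskImplicitNet, training_step, latent_dim, n_parts) are hypothetical.

```python
# Hypothetical sketch: one latent code per shape, three task heads,
# and a summed multi-task loss. Architectural details are assumptions.
import torch
import torch.nn as nn


class MultiTaskImplicitNet(nn.Module):
    def __init__(self, latent_dim=256, n_classes=10, n_parts=8):
        super().__init__()
        # Point-cloud encoder producing a single latent code per shape
        # (PointNet-style pooling is an assumption, not stated in the abstract).
        self.encoder = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Reconstruction head: implicit occupancy at a 3D query point,
        # conditioned on the latent code.
        self.occupancy = nn.Sequential(
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
        # Classification head on the latent code alone.
        self.classifier = nn.Linear(latent_dim, n_classes)
        # Segmentation head: per-query-point part label, also conditioned
        # on the latent code (one way to pose segmentation implicitly).
        self.segmenter = nn.Sequential(
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, n_parts),
        )

    def forward(self, points, queries):
        # points:  (B, N, 3) surface samples;  queries: (B, Q, 3) query locations
        z = self.encoder(points).max(dim=1).values            # (B, latent_dim)
        z_q = z.unsqueeze(1).expand(-1, queries.shape[1], -1)  # (B, Q, latent_dim)
        occ = self.occupancy(torch.cat([z_q, queries], -1))    # (B, Q, 1)
        cls = self.classifier(z)                               # (B, n_classes)
        seg = self.segmenter(torch.cat([z_q, queries], -1))    # (B, Q, n_parts)
        return occ, cls, seg


def training_step(model, batch, optimizer):
    # Joint training: summing the three task losses forces the encoding to
    # support reconstruction and the conventional tasks simultaneously.
    points, queries, occ_gt, cls_gt, seg_gt = batch
    occ, cls, seg = model(points, queries)
    loss = (
        nn.functional.binary_cross_entropy_with_logits(occ.squeeze(-1), occ_gt)
        + nn.functional.cross_entropy(cls, cls_gt)
        + nn.functional.cross_entropy(seg.permute(0, 2, 1), seg_gt)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

How the task losses are weighted, and how segmentation is reformulated for the implicit setting, are specified in the paper itself; equal weighting above is purely a placeholder.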


Bibliographic Details
Main Authors: Costain, TW, Prisacariu, VA
Format: Internet publication
Language: English
Published: 2021
Institution: University of Oxford
Collection: OXFORD
Record ID: oxford-uuid:a3423889-14e1-439d-a56d-839a69339dbe