Improving novelty detection using the reconstructions of nearest neighbours


Bibliographic Details
Main Authors: Michael Mesarcik, Elena Ranguelova, Albert-Jan Boonstra, Rob V. van Nieuwpoort
Format: Article
Language: English
Published: Elsevier, 2022-07-01
Series: Array
Subjects: Anomaly detection; Autoencoders; Semi-supervised learning
Online Access: http://www.sciencedirect.com/science/article/pii/S2590005622000388
author Michael Mesarcik
Elena Ranguelova
Albert-Jan Boonstra
Rob V. van Nieuwpoort
collection DOAJ
description We show that using nearest neighbours in the latent space of autoencoders (AE) significantly improves the performance of semi-supervised novelty detection in both single- and multi-class contexts. Autoencoding methods detect novelty by learning to differentiate between the non-novel training class(es) and all other unseen classes. Our method harnesses a combination of the reconstructions of the nearest neighbours and the latent-neighbour distances of a given input’s latent representation. We demonstrate that our nearest-latent-neighbours (NLN) algorithm is memory- and time-efficient, does not require significant data augmentation, and is not reliant on pretrained networks. Furthermore, we show that the NLN algorithm is easily applicable to multiple datasets without modification. Additionally, the proposed algorithm is agnostic to autoencoder architecture and reconstruction-error method. We validate our method across several standard datasets for a variety of autoencoding architectures, such as vanilla, adversarial and variational autoencoders, using reconstruction, residual or feature-consistent losses. The results show that the NLN algorithm grants up to a 17% increase in Area Under the Receiver Operating Characteristic (AUROC) curve performance for the multi-class case and 8% for single-class novelty detection.
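The abstract describes scoring an input by combining the reconstructions of its nearest latent neighbours with the latent distances to those neighbours. The paper defines the exact formulation; the sketch below is only an illustrative approximation of that idea, with all names (`nln_novelty_scores`, `decode`, `alpha`, `k`) and the weighted-sum combination chosen here for illustration, not taken from the paper.

```python
import numpy as np

def nln_novelty_scores(z_test, z_train, x_test, decode, k=3, alpha=0.5):
    """Rough sketch of nearest-latent-neighbours (NLN) style scoring.

    z_test  : (n_test, d)  latent codes of the inputs to score
    z_train : (n_train, d) latent codes of the non-novel training set
    x_test  : (n_test, ...) the inputs themselves
    decode  : maps latent codes back to input space (the AE decoder)
    Higher score = more novel.
    """
    # Pairwise latent distances between test and training representations.
    d = np.linalg.norm(z_test[:, None, :] - z_train[None, :, :], axis=-1)
    nn_idx = np.argsort(d, axis=1)[:, :k]            # k nearest training latents
    nn_dist = np.take_along_axis(d, nn_idx, axis=1)  # their latent distances

    # Reconstruct the neighbours' latent codes and compare to the input:
    # non-novel inputs should be well explained by their neighbours' reconstructions.
    recon_err = np.empty(len(x_test))
    for i in range(len(x_test)):
        recons = decode(z_train[nn_idx[i]])          # (k, ...) reconstructions
        errs = np.mean((recons - x_test[i]) ** 2,
                       axis=tuple(range(1, recons.ndim)))
        recon_err[i] = errs.mean()

    # Combine neighbour-reconstruction error with mean latent distance.
    return alpha * recon_err + (1 - alpha) * nn_dist.mean(axis=1)
```

Because the score depends only on latent codes and decoder outputs, a sketch like this is agnostic to the autoencoder architecture and to the choice of reconstruction-error method, consistent with the claims in the abstract.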
id doaj.art-c38f33f22dde4311ad78276c5509e674
institution Directory Open Access Journal
issn 2590-0056
Published in Array, vol. 14 (2022-07-01), article 100182.
Author affiliations:
Michael Mesarcik: University of Amsterdam, Science Park 904, Amsterdam, 1098XH, The Netherlands (corresponding author)
Elena Ranguelova: eScience Center, Science Park 140, Amsterdam, 1098XG, The Netherlands
Albert-Jan Boonstra: ASTRON, the Netherlands Institute for Radio Astronomy, Oude Hoogeveensedijk 4, Dwingeloo, 7991PD, The Netherlands
Rob V. van Nieuwpoort: University of Amsterdam, Science Park 904, Amsterdam, 1098XH, The Netherlands; eScience Center, Science Park 140, Amsterdam, 1098XG, The Netherlands
title Improving novelty detection using the reconstructions of nearest neighbours
topic Anomaly detection
Autoencoders
Semi-supervised learning
url http://www.sciencedirect.com/science/article/pii/S2590005622000388