Pay attention and you won’t lose it: a deep learning approach to sequence imputation
In most areas of machine learning, it is assumed that data quality is fairly consistent between training and inference. Unfortunately, in real systems, data are plagued by noise, loss, and various other quality-reducing factors. While a number of deep learning algorithms solve end-stage problems of...
Main Authors: | Ilia Sucholutsky, Apurva Narayan, Matthias Schonlau, Sebastian Fischmeister |
Format: | Article |
Language: | English |
Published: | PeerJ Inc., 2019-08-01 |
Series: | PeerJ Computer Science |
Subjects: | Deep learning; Data restoration; Sequence modelling; Data loss; Attention mechanisms; Automotive data |
Online Access: | https://peerj.com/articles/cs-210.pdf |
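The abstract describes replacing the recurrent (LSTM) components of a sequence-restoration network with attention mechanisms, so the model processes the whole sequence in parallel rather than through a long chain of sequential operations. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' published architecture: the class name `AttentionImputer`, the mask-token scheme, the learned positional embeddings, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch (assumption, not the paper's model): self-attention in place of an LSTM
# for imputing missing entries in a discrete event sequence.
import torch
import torch.nn as nn


class AttentionImputer(nn.Module):
    def __init__(self, vocab_size, d_model=64, nhead=4, num_layers=2, max_len=256):
        super().__init__()
        self.mask_id = vocab_size                      # extra token id marking a missing entry
        self.embed = nn.Embedding(vocab_size + 1, d_model)
        self.pos = nn.Embedding(max_len, d_model)      # learned positional encoding (assumption)
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)      # per-position logits over the vocabulary

    def forward(self, tokens):
        # tokens: (batch, seq_len), with missing positions already set to self.mask_id
        positions = torch.arange(tokens.size(1), device=tokens.device)
        h = self.embed(tokens) + self.pos(positions)
        h = self.encoder(h)                            # attention sees the whole sequence at once
        return self.out(h)


# Toy usage: randomly drop ~20% of tokens from a synthetic sequence and learn to restore them.
vocab_size, seq_len = 50, 32
model = AttentionImputer(vocab_size)
clean = torch.randint(0, vocab_size, (8, seq_len))
missing = torch.rand(8, seq_len) < 0.2
corrupted = clean.clone()
corrupted[missing] = model.mask_id
logits = model(corrupted)
loss = nn.functional.cross_entropy(logits[missing], clean[missing])  # loss only on masked slots
loss.backward()
```

Training only on the masked positions mirrors an imputation objective; at inference, missing entries would be filled with the argmax of the per-position logits. Replacing recurrence with an attention encoder is what removes the sequential-operation bottleneck the abstract attributes to LSTMs.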
_version_ | 1818943691809619968
author | Ilia Sucholutsky Apurva Narayan Matthias Schonlau Sebastian Fischmeister |
author_facet | Ilia Sucholutsky Apurva Narayan Matthias Schonlau Sebastian Fischmeister |
author_sort | Ilia Sucholutsky |
collection | DOAJ |
description | In most areas of machine learning, it is assumed that data quality is fairly consistent between training and inference. Unfortunately, in real systems, data are plagued by noise, loss, and various other quality-reducing factors. While a number of deep learning algorithms solve end-stage problems of prediction and classification, very few aim to solve the intermediate problems of data pre-processing, cleaning, and restoration. Long Short-Term Memory (LSTM) networks have previously been proposed as a solution for data restoration, but they suffer from a major bottleneck: a large number of sequential operations. We propose using attention mechanisms to entirely replace the recurrent components of these data-restoration networks. We demonstrate that such an approach leads to model sizes reduced by as much as two orders of magnitude, a 2-fold to 4-fold reduction in training times, and 95% accuracy for automotive data restoration. We also show in a case study that this approach improves the performance of downstream algorithms reliant on clean data. |
first_indexed | 2024-12-20T07:31:21Z |
format | Article |
id | doaj.art-9b9f0cc821824447bb607b975e906948 |
institution | Directory Open Access Journal |
issn | 2376-5992 |
language | English |
last_indexed | 2024-12-20T07:31:21Z |
publishDate | 2019-08-01 |
publisher | PeerJ Inc. |
record_format | Article |
series | PeerJ Computer Science |
spelling | doaj.art-9b9f0cc821824447bb607b975e906948; 2022-12-21T19:48:24Z; eng; PeerJ Inc.; PeerJ Computer Science; 2376-5992; 2019-08-01; Vol. 5, e210; 10.7717/peerj-cs.210; Pay attention and you won’t lose it: a deep learning approach to sequence imputation; Ilia Sucholutsky (Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Ontario, Canada); Apurva Narayan (Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada); Matthias Schonlau (Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Ontario, Canada); Sebastian Fischmeister (Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada); In most areas of machine learning, it is assumed that data quality is fairly consistent between training and inference. Unfortunately, in real systems, data are plagued by noise, loss, and various other quality-reducing factors. While a number of deep learning algorithms solve end-stage problems of prediction and classification, very few aim to solve the intermediate problems of data pre-processing, cleaning, and restoration. Long Short-Term Memory (LSTM) networks have previously been proposed as a solution for data restoration, but they suffer from a major bottleneck: a large number of sequential operations. We propose using attention mechanisms to entirely replace the recurrent components of these data-restoration networks. We demonstrate that such an approach leads to model sizes reduced by as much as two orders of magnitude, a 2-fold to 4-fold reduction in training times, and 95% accuracy for automotive data restoration. We also show in a case study that this approach improves the performance of downstream algorithms reliant on clean data. https://peerj.com/articles/cs-210.pdf; Deep learning; Data restoration; Sequence modelling; Data loss; Attention mechanisms; Automotive data |
spellingShingle | Ilia Sucholutsky Apurva Narayan Matthias Schonlau Sebastian Fischmeister Pay attention and you won’t lose it: a deep learning approach to sequence imputation PeerJ Computer Science Deep learning Data restoration Sequence modelling Data loss Attention mechanisms Automotive data |
title | Pay attention and you won’t lose it: a deep learning approach to sequence imputation |
title_full | Pay attention and you won’t lose it: a deep learning approach to sequence imputation |
title_fullStr | Pay attention and you won’t lose it: a deep learning approach to sequence imputation |
title_full_unstemmed | Pay attention and you won’t lose it: a deep learning approach to sequence imputation |
title_short | Pay attention and you won’t lose it: a deep learning approach to sequence imputation |
title_sort | pay attention and you won t lose it a deep learning approach to sequence imputation |
topic | Deep learning Data restoration Sequence modelling Data loss Attention mechanisms Automotive data |
url | https://peerj.com/articles/cs-210.pdf |
work_keys_str_mv | AT iliasucholutsky payattentionandyouwontloseitadeeplearningapproachtosequenceimputation AT apurvanarayan payattentionandyouwontloseitadeeplearningapproachtosequenceimputation AT matthiasschonlau payattentionandyouwontloseitadeeplearningapproachtosequenceimputation AT sebastianfischmeister payattentionandyouwontloseitadeeplearningapproachtosequenceimputation |