Insights Into LSTM Fully Convolutional Networks for Time Series Classification


Bibliographic Details
Main Authors: Fazle Karim, Somshubra Majumdar, Houshang Darabi
Format: Article
Language: English
Published: IEEE 2019-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8713870/
Description
Summary: Long short-term memory fully convolutional neural networks (LSTM-FCNs) and the Attention LSTM-FCN (ALSTM-FCN) have been shown to achieve state-of-the-art performance on the task of classifying time series signals on the old University of California-Riverside (UCR) time series repository. However, there has been no study of why LSTM-FCN and ALSTM-FCN perform well. In this paper, we perform a series of ablation tests (3627 experiments) on the LSTM-FCN and ALSTM-FCN to provide a better understanding of the model and each of its sub-modules. The results of the ablation tests on the ALSTM-FCN and LSTM-FCN show that the LSTM and FCN blocks perform better when applied in a conjoined manner. Two z-normalizing techniques, z-normalizing each sample independently and z-normalizing the whole dataset, are compared using a Wilcoxon signed-rank test to show a statistical difference in performance. In addition, we provide an understanding of the impact that dimension shuffle has on LSTM-FCN by comparing its performance with that of LSTM-FCN when no dimension shuffle is applied. Finally, we demonstrate the performance of the LSTM-FCN when the LSTM block is replaced by a gated recurrent unit (GRU), a basic recurrent neural network (RNN), and a dense block.
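To make the two z-normalization schemes and the dimension shuffle concrete, the following is a minimal NumPy sketch. It is illustrative only, not the authors' code; the array shapes, variable names, and toy data are assumptions, and the dimension shuffle is interpreted here as the input transpose used in LSTM-FCN.

```python
import numpy as np

# Toy data: 4 univariate series of length 8 (shapes are illustrative assumptions).
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(4, 8))

# Scheme 1: z-normalize each sample independently.
# Every series is rescaled by its own mean and standard deviation.
per_sample = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Scheme 2: z-normalize the whole dataset.
# A single mean and standard deviation are computed over all values.
whole_dataset = (X - X.mean()) / X.std()

# Dimension shuffle: transpose the LSTM input from
# (batch, time steps, 1 variable) to (batch, 1 time step, N variables),
# so the LSTM processes the entire series in a single recurrent step.
X_lstm = X[:, :, np.newaxis]                  # (4, 8, 1): 8 steps, 1 feature
X_shuffled = np.transpose(X_lstm, (0, 2, 1))  # (4, 1, 8): 1 step, 8 features
```

The practical difference between the two schemes: per-sample normalization gives every series zero mean and unit variance on its own, while whole-dataset normalization only guarantees this in aggregate, which is why the paper compares them with a paired statistical test.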
ISSN: 2169-3536