Proposing Multimodal Integration Model Using LSTM and Autoencoder

We propose a neural network architecture that can learn and integrate sequential multimodal information using Long Short-Term Memory (LSTM). Our model consists of encoder and decoder LSTMs and a multimodal autoencoder. To integrate sequential multimodal information, the encoder LSTM first encodes a...

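The abstract in this record is truncated, but the pipeline it outlines (encoder LSTMs, a shared multimodal autoencoder, and decoder LSTMs) can be sketched roughly as below. This is a minimal illustration assuming PyTorch, two hypothetical modalities, and arbitrary layer sizes; the authors' actual modalities, dimensions, and training procedure are not given in this record.

```python
# Minimal sketch, NOT the authors' implementation: per-modality encoder LSTMs,
# a fusion autoencoder over their final states, and decoder LSTMs that
# reconstruct each sequence. All dimensions are hypothetical.
import torch
import torch.nn as nn


class MultimodalSeqAutoencoder(nn.Module):
    def __init__(self, dim_a=64, dim_b=32, hidden=128, fused=64):
        super().__init__()
        # One encoder LSTM per modality (two modalities assumed here).
        self.enc_a = nn.LSTM(dim_a, hidden, batch_first=True)
        self.enc_b = nn.LSTM(dim_b, hidden, batch_first=True)
        # Multimodal autoencoder: fuse both encoded states into one code.
        self.fuse = nn.Sequential(nn.Linear(2 * hidden, fused), nn.ReLU())
        self.unfuse = nn.Sequential(nn.Linear(fused, 2 * hidden), nn.ReLU())
        # Decoder LSTMs reconstruct each modality from its recovered state.
        self.dec_a = nn.LSTM(dim_a, hidden, batch_first=True)
        self.dec_b = nn.LSTM(dim_b, hidden, batch_first=True)
        self.out_a = nn.Linear(hidden, dim_a)
        self.out_b = nn.Linear(hidden, dim_b)

    def forward(self, xa, xb):
        # Encode each modality's sequence; keep the final hidden state.
        _, (ha, _) = self.enc_a(xa)                       # (1, B, hidden)
        _, (hb, _) = self.enc_b(xb)
        # Integrate the modalities through the autoencoder bottleneck.
        code = self.fuse(torch.cat([ha[-1], hb[-1]], dim=-1))
        ha_rec, hb_rec = self.unfuse(code).chunk(2, dim=-1)
        # Decode from the recovered states, driven by zero inputs.
        state_a = (ha_rec.unsqueeze(0), torch.zeros_like(ha_rec).unsqueeze(0))
        state_b = (hb_rec.unsqueeze(0), torch.zeros_like(hb_rec).unsqueeze(0))
        za = torch.zeros(xa.size(0), xa.size(1), xa.size(2))
        zb = torch.zeros(xb.size(0), xb.size(1), xb.size(2))
        out_a, _ = self.dec_a(za, state_a)
        out_b, _ = self.dec_b(zb, state_b)
        return self.out_a(out_a), self.out_b(out_b)


if __name__ == "__main__":
    model = MultimodalSeqAutoencoder()
    xa = torch.randn(4, 10, 64)   # modality A: batch of 10-step sequences
    xb = torch.randn(4, 10, 32)   # modality B
    rec_a, rec_b = model(xa, xb)
    loss = nn.functional.mse_loss(rec_a, xa) + nn.functional.mse_loss(rec_b, xb)
    print(rec_a.shape, rec_b.shape, loss.item())
```

Training such a model end to end with a reconstruction loss, as in the usage stub above, is one plausible reading of the abstract; the paper itself should be consulted for the actual objective.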

Bibliographic Details
Main Authors: Wataru Noguchi, Hiroyuki Iizuka, Masahito Yamamoto
Format: Article
Language: English
Published: European Alliance for Innovation (EAI) 2016-12-01
Series: EAI Endorsed Transactions on Security and Safety
Online Access: http://eudl.eu/doi/10.4108/eai.3-12-2015.2262505