Difference between memory and prediction in linear recurrent networks
Recurrent networks are trained to memorize their input better, often in the hopes that such training will increase the ability of the network to predict. We show that networks designed to memorize input can be arbitrarily bad at prediction. We also find, for several types of inputs, that one-node ne...
Main Author: | |
---|---|
Other Authors: | |
Format: | Article |
Language: | English |
Published: | American Physical Society, 2018 |
Online Access: | http://hdl.handle.net/1721.1/114553 |