Summary: | Neural networks trained with backpropagation have achieved impressive results over the last decade. However, training such models requires sequential backward updates and non-local computations, which make them difficult to parallelize at scale and to implement on novel hardware, and which differ from how learning works in the brain. Neuroscience-inspired learning algorithms, such as predictive coding, have the potential to overcome these limitations and to advance beyond current deep learning technologies. This potential, however, has only recently gained the attention of the community, and as a consequence the properties of these algorithms are still underexplored. In this thesis, I aim to fill this gap by exploring three interesting properties of predictive coding: first, that there exists a variation of predictive coding that is equivalent to backpropagation in supervised learning; second, that predictive coding can implement powerful associative memories; and third, that it can train neural networks whose graphs have any topology. The first result implies that predictive coding networks can be as accurate as standard ones when used to perform supervised learning tasks; the last two, that they can perform tasks with a robustness and flexibility that standard deep learning models lack. I conclude by discussing future directions of research, such as neural architecture search, novel hardware implementations, and implications for neuroscience. All in all, the results presented in this thesis are consistent with recent trends in the literature, which show that neuroscience-inspired learning methods may have interesting machine learning properties and should be considered a valid alternative to backpropagation.
|