A Bayesian account of learning and generalising representations in the brain


Bibliographic Details
Main Author: Whittington, JCR
Other Authors: Bogacz, R
Format: Thesis
Language: English
Published: 2019
Summary:
<p>Without learning we would be limited to a set of preprogrammed behaviours. While that may be acceptable for flies, it does not provide the basis for the adaptive or intelligent behaviours familiar to humans. Learning, then, is one of the crucial components of brain operation. Learning, however, takes time. Thus, the key to adaptive behaviour is learning to systematically generalise; that is, to acquire knowledge that can be flexibly recombined to understand any world in front of you. This thesis attempts to make inroads on two questions: how can brain networks learn, and what principles underlie representations of knowledge that allow generalisation? Though bound by a common framework of Bayesian thinking, the thesis considers the questions in two separate parts.</p>
<p>In the first part of the thesis, we investigate algorithms the brain may use to update connection strengths. While learning attempts to optimise a global function of the brain state, each connection has access only to local information. This is in contrast to artificial networks, where global information is easily conveyed to each synapse via the back-propagation algorithm. We show that, contrary to decades-old beliefs, an algorithm analogous to back-propagation could be implemented in the local dynamics and learning rules of brain networks. We show an exact equivalence between the two algorithms and demonstrate that they perform identically on a standard machine learning benchmark. These results are the first to show that an algorithm as efficient as those used in machine learning could be implemented in the brain.</p>
<p>In the second part of the thesis, we investigate frameworks for learning and generalising neural representations. It is proposed that a cognitive map encoding the relationships between entities in the world supports flexible behaviour. This map is traditionally associated with the hippocampal formation, owing to its beautiful representations of space. This cognitive map, though, seems at odds with the other well-characterised aspect of the hippocampus: relational memory. Here we unify spatial cognition and relational memory within the framework of generalising relational knowledge. Using this framework, we build a machine that learns and generalises knowledge in both spatial and non-spatial tasks, while also displaying representations that mirror those in the brain. Finally, we confirm model predictions in neural data. Together, these results provide a computational framework for a systematic organisation of knowledge spanning all domains of behaviour.</p>
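The central claim of the first part, that purely local dynamics and learning rules can reproduce back-propagation, can be illustrated with a small predictive-coding sketch. Everything below is an illustrative assumption rather than the thesis's exact formulation: the network sizes, the tanh nonlinearity, and the "fixed prediction" simplification (predictions held at their feed-forward values during relaxation), under which the local updates match back-propagation exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.tanh                              # activation function (assumed)
def df(x):
    return 1.0 - np.tanh(x) ** 2         # its derivative

# Tiny two-layer network: input -> hidden -> output (sizes arbitrary).
n_in, n_hid, n_out = 4, 5, 3
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
x0 = rng.normal(size=(n_in, 1))          # clamped input
y = rng.normal(size=(n_out, 1))          # clamped target

# --- reference: back-propagation on L = 0.5 * ||y - output||^2 ---
mu1 = W1 @ f(x0)                         # feed-forward hidden pre-activation
mu2 = W2 @ f(mu1)                        # feed-forward output
delta2 = y - mu2
delta1 = df(mu1) * (W2.T @ delta2)       # error propagated back through W2
dW2_bp = delta2 @ f(mu1).T
dW1_bp = delta1 @ f(x0).T

# --- predictive coding: relax activity, then purely local updates ---
# Predictions stay at their feed-forward values (fixed-prediction
# simplification); the hidden activity x1 settles by gradient descent on
# the network's energy, driven only by locally available prediction errors.
e2 = y - mu2                             # output-layer prediction error
x1 = mu1.copy()
for _ in range(200):
    e1 = x1 - mu1                        # hidden-layer prediction error
    x1 += 0.2 * (-e1 + df(mu1) * (W2.T @ e2))
e1 = x1 - mu1                            # error at equilibrium

# Each weight change uses only the error and activity at that connection.
dW2_pc = e2 @ f(mu1).T
dW1_pc = e1 @ f(x0).T

print(np.allclose(dW1_pc, dW1_bp, atol=1e-9))  # -> True
```

At equilibrium the hidden error `e1` equals the back-propagated `delta1`, so the Hebbian-looking products above reproduce the back-propagation gradients even though no synapse ever sees global information. Without the fixed-prediction simplification the match is approximate rather than exact.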